All Topics

I have an alert that runs every hour, triggered when the number of results is greater than 0, for each result. I have the throttle option checked, and "suppress results containing field value" set to "myData.message" (this is an error message I want alerts for). The action is a Slack message. The result of this alert is one Slack message for each unique error message found in the past hour. I want to know if there's a way to get the count of the specific/unique error message that it's firing for. $job.resultCount$ gives me the count of all error messages found, so that doesn't work for my use case.
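One possible approach (a sketch, not tested against this exact alert — the index and sourcetype below are placeholders): have the search itself compute a per-message count, then reference that field from the Slack action via a $result.*$ token.

```spl
index=my_index sourcetype=my_sourcetype earliest=-1h "error"
| stats count AS message_count BY myData.message
```

With "trigger for each result", each triggered result carries its own token values, so the Slack message template can use $result.myData.message$ and $result.message_count$ instead of $job.resultCount$.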
I want to send indexed data to another server but I'm running into an "unable to create/find path" error. Is this a permissions issue? Or maybe a syntax error? Any advice would be helpful. Thank you!

Info:
- Windows 2016 environment
- I have two servers set up with UNC paths of \\server01\hotwarmstorage and \\server02\coldstorage that use a service account credential (i.e. svcSplunk) to gain access.
- Splunk is installed using the SYSTEM account.
- I've tried to use the UNC path and also mapped the storage drives to Y: and Z: on the indexers and master.
- While on the indexer and in CMD I can do y: to access the network path.

Errors:
1) Failed to create directory 'Y:\hotwarmstorage\index-test\db' (The system cannot find the path specified.)
2) '\\server01\hotwarmstorage\index-test\db' (The specified path is invalid)
3) I've tried a non-credentialed network path \\server03\splunkstorage and I get an error '\\server03\splunkstorage\index-test\db' (Cannot create a file when that file already exists.)

Master indexes.conf attempts:
Attempt 1: [volume:seam_test_hotwarm] path = Y:\hotwarmstorage
Attempt 2: [volume:seam_test_hotwarm] path = \\server01\hotwarmstorage
Index indexes.conf: [index-test] repFactor = 0 homePath = volume:seam_test_hotwarm/index-test/db
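Two likely culprits, offered as guesses rather than a confirmed diagnosis: mapped drive letters (Y:, Z:) are per-logon-session, so a Windows service never sees them, and the Local SYSTEM account cannot authenticate to a credentialed share. One common workaround is to run the splunkd service as the svcSplunk account (with Modify rights on the share) and point the volume at the UNC path directly:

```spl
# indexes.conf sketch -- assumes splunkd runs as svcSplunk, not SYSTEM;
# mapped drive letters will not resolve for a Windows service
[volume:seam_test_hotwarm]
path = \\server01\hotwarmstorage

[index-test]
repFactor = 0
homePath = volume:seam_test_hotwarm/index-test/db
```

Note that Splunk generally recommends local disk for hot/warm buckets; network shares for hot/warm storage are risky territory, so test carefully.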
After SmartStore was enabled for the deployment, the indexers' logs are flooded with messages like "INFO CacheManagerHandler - cache_id="ra|tto_uswest2_tomcatfrontend~39~4345D76C-80D6-4BC7-991F-EA835C2B892C|08281223-D92B-4A36-BCA0-83970376D322_tto_search_agupta13_NS2480590abee10f99" not found cache_id = ra|tto_uswest2_tomcatfrontend~39~4345D76C-80D6-4BC7-991F-EA835C2B892C|" What is the best way to find the bucket corresponding to the report acceleration summary?
Hello! I'm trying to do a charting task from a lookup table and can't seem to nail down a solution. This is essentially the format of my data in the csv:

Status  Opened Date  Closed Date
Open    4/3/2020     TBD
Closed  4/3/2020     9/10/2020
Open    4/3/2020     TBD
Open    4/3/2020     TBD
Closed  5/6/2020     7/4/2020
Open    8/6/2020     TBD

Essentially what I would like to do is create a line chart that illustrates the events' status over their time frame. Below would be an example of the above data. Any help with this would be extremely appreciated!
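One possible sketch, assuming the lookup is named events.csv and the dates use %m/%d/%Y: convert the dates to epoch, treat each row as a span from open to close (TBD = still open), and let the concurrency command count how many events are open at any point in time:

```spl
| inputlookup events.csv
| eval start=strptime('Opened Date', "%m/%d/%Y")
| eval end=if('Closed Date'="TBD", now(), strptime('Closed Date', "%m/%d/%Y"))
| eval duration=end-start, _time=start
| sort 0 _time
| concurrency duration=duration
| timechart span=1w max(concurrency) AS open_events
```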
Hi, I'm new to Splunk and hope you guys are having a good day! How can I query and extract the information from this event field? For example, I would like to get the object name value and the change information. From there I'll create a column and display the extracted values. I feel that the Windows log itself is quite difficult to search due to the limited number of fields. Thank you in advance.
I have a Splunk deployment server that needs to send app changes out to different servers with forwarders running in different time zones. The time zone difference causes the changes not to be pushed because the file dates are in the future compared with the deployment server. Any suggestions?
I am using this body: {"time": "", "event":{"hello": "world"}} Postman URI: "https://localhost:8088/services/collector" Local Splunk instance: "http://localhost:8000/en-GB/manager/search/http-eventcollector" But I'm getting: { "text": "No data", "code": 5 } I am using Splunk 8.0.6. Please help!
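A guess at the cause: code 5 ("No data") means HEC could not find usable event data, and an empty "time" value can trip the payload parsing. It may be worth sending the minimal body below (dropping the empty "time" key, or supplying a real epoch value) and double-checking that the request carries the HEC token header:

```spl
POST https://localhost:8088/services/collector
Authorization: Splunk <your-HEC-token>

{"event": {"hello": "world"}}
```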
Here is my problem statement.

1st Query:
index=test "TestRequest" | dedup _time | rex field=_raw "Price\":(?<price>.*?)," | rex field=_raw REQUEST-ID=(?<REQID>.*?)\s | rex field=_raw "Amount\":(?<amount>.*?)}," | rex field=_raw "ItemId\":\"(?<itemId>.*?)\"}" | eval discount=round(exact(price-amount),2) , percent=(discount/price)*100 , time=strftime(_time, "%m-%d-%y %H:%M:%S") | stats list(time) as Time list(itemId) as "Item" list(REQID) as X-REQUEST-ID list(price) as "Original Price" list(amount) as "Test Price" list(discount) as "Dollar Discount" list(percent) as "Percent Override" by _time | join X-REQUEST-ID [search index=test "UserId=" | rex field=_raw UserId=(?<userId>.*?)# | dedup userId | rex field=_raw X-REQUEST-ID=(?<REQID>.*?)\s | stats list(userId) as "User ID" list(REQID) as X-REQUEST-ID by _time]

Sample Output:
Time  User Id  Item  X-REQUEST-ID  Original Price  Test Price  Dollar Discount  Percent Override
1     1        1     1             1               1           1                1
2     2        2     2             2               2           2                2
3     3        3     3             3               3           3                3
4     4        4     4             4               4           4                4
5     5        5     5             5               5           5                5

2nd Query:
search index=test "Remove Completed for" | rex field=_raw UserId=(?<userId>.*?)# | rex field=_raw X-REQUEST-ID=(?<REQID>.*?)\s | stats list(userId) as "User ID" list(REQID) as X-REQUEST-ID by _time

Sample Output:
User Id
4

3rd Query:
search index=test "Clear Completed for" | rex field=_raw UserId=(?<userId>.*?)# | rex field=_raw X-REQUEST-ID=(?<REQID>.*?)\s | stats list(userId) as "User ID" list(REQID) as X-REQUEST-ID by _time

Sample Output:
User Id
5

I want the final output as:
Time  UserId  Item  X-REQUEST-ID  Original Price  Test Price  Dollar Discount  Percent Override
1     1       1     1             1               1           1                1
2     2       2     2             2               2           2                2
3     3       3     3             3               3           3                3

The above output excludes the results of the 2nd and 3rd queries from the main search query result (1st Query) based on the value of the "User Id" field. So if a "User Id" found in the 1st Query is also found in either the 2nd or 3rd Query, exclude that "User Id" row from the main 1st Query result.
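One way to sketch this (untested, reusing the poster's own rex patterns): append a NOT subsearch to the 1st query that collects the "User ID" values from both the Remove and Clear events in a single pass, so any matching row is filtered out:

```spl
... 1st Query ...
| search NOT [search index=test ("Remove Completed for" OR "Clear Completed for")
    | rex field=_raw "UserId=(?<userId>.*?)#"
    | dedup userId
    | rename userId AS "User ID"
    | fields "User ID"]
```

The subsearch expands to a disjunction of "User ID"=&lt;value&gt; terms, which the outer NOT negates against the joined results.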
I have 2 search queries, one main and one subquery, and I need to find the count difference between the two searches.
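As a minimal sketch (index names and search terms are placeholders), appendcols can line the two counts up side by side so an eval can take the difference:

```spl
index=main_index main_search_terms
| stats count AS main_count
| appendcols [search index=other_index sub_search_terms | stats count AS sub_count]
| eval difference = main_count - sub_count
```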
Is it possible to roll back changes if we face some issue upon a Splunk Enterprise version upgrade?
After a hardware failure was resolved, I attempted to start Splunk again... but I am now getting this error: "The index processor has paused data flow. Current free disk space on partition '/' has fallen to 158MB, below the minimum of 5000MB. Data writes to index path '/data1/splunk/indexes/audit/db' cannot safely proceed. Increase free disk space on partition '/' by removing or relocating data." I understand what it is saying, but the odd part is that partition "/" never had that much space, and all other indexers are configured the same with no issues. What am I missing here?
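Two things worth checking, offered as guesses: first, whether /data1 is actually mounted — if the hardware failure left the data volume unmounted, the index path silently falls back onto the small root partition, which would explain why only this indexer complains about '/'. Second, the 5000MB floor is the minFreeSpace setting in server.conf, which can be lowered as a stopgap:

```spl
# server.conf sketch -- a temporary workaround only;
# restoring the real data partition is the proper fix
[diskUsage]
minFreeSpace = 2000
```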
I added a third index to my cluster master. How do I tell my forwarders to send data to the new index, or how do my forwarders know about the new index? Thank you.
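Forwarders don't discover indexes on their own; data is routed into an index by setting index= on each input, typically in an app pushed from the deployment server. The path, sourcetype, and index name below are placeholders:

```spl
# inputs.conf sketch on the forwarder
[monitor:///var/log/myapp]
index = my_new_index
sourcetype = myapp
```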
So while it might be a no-brainer, if we're enterprise users in AWS managing all on our own, to deploy with the Enterprise AMI (8.0.6), there is a cost associated with running this instance other than our current license (we would be BYOL). So what are the advantages of a BYOL AMI deployment as compared to a self-managed instance? Some thoughts:
a) timeliness of patch awareness
b) certified OS platform for upgrades
c) standardized update process for keeping patch-aware
d) did I misunderstand the pricing charts? Does an AMI deployment with BYOL not incur any additional costs from Splunk? As in: the costs detailed on the marketplace pages are just AWS-billed costs?
I have a search like the one below to show me 'src_ip' and 'count' for the last 10 minutes:

index="pan" sourcetype="pan:threat" earliest=-10m action=allowed NOT [| inputlookup Exclusions | fields src_ip] | stats count by src_ip | where count > 10 | sort - count

1.1.1.1 10
2.2.2.2 12

and a second search to show only src_ip over the last 24h (to eliminate src_ip repeated in any of the 10-minute periods in the last 24h):

index="pan" sourcetype="pan:threat" earliest=-24h action=allowed NOT [| inputlookup Exclusions ] | bin _time span=10m | stats count by _time src_ip | where count > 10 | stats count by src_ip | where count = 1 | fields - count

But the combined search to show me only src_ip with count where src_ip is present in the subsearch is not working correctly, because src_ip values are not unique across subsequent 10-minute intervals:

index="pan" sourcetype="pan:threat" earliest=-10m action=allowed NOT [| inputlookup Exclusions | fields src_ip] | stats count by src_ip | where count > 10 | sort - count IN [index="pan" sourcetype="pan:threat" earliest=-24h action=allowed NOT [| inputlookup Exclusions ] | bin _time span=10m | stats count by _time src_ip | where count > 10 | stats count by src_ip | where count = 1 | fields - count]
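One possible restructuring (a sketch, untested): run the 24h search once, keep the per-interval counts, use eventstats to flag src_ip values that exceeded the threshold in exactly one 10-minute interval, and then keep only the most recent interval:

```spl
index="pan" sourcetype="pan:threat" earliest=-24h action=allowed
    NOT [| inputlookup Exclusions | fields src_ip]
| bin _time span=10m
| stats count BY _time src_ip
| where count > 10
| eventstats dc(_time) AS intervals BY src_ip
| where intervals = 1 AND _time >= relative_time(now(), "-10m@m")
| sort - count
| table src_ip count
```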
I want to extend the results of the first search: add the column category (from the 2nd search) to the results of the 1st search. The results of the first search appear. The results of the 2nd search are also present. The 2 datasets have one common field, dns_query. But using the join command, no matches are found between these 2 datasets (which should be impossible, because I checked some of the dns_query values). Any ideas what can be wrong?
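A common cause, offered as a guess: join matches on the exact string, so differences in case, whitespace, or a trailing dot (frequent in DNS data) make every comparison fail. Normalizing dns_query on both sides before the join often resolves it:

```spl
... first search ...
| eval dns_query=lower(trim(replace(dns_query, "\.$", "")))
| join type=left dns_query
    [search ... second search ...
    | eval dns_query=lower(trim(replace(dns_query, "\.$", "")))]
```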
With this query I can see the notable events that are currently active. But not every one of them has alerted, even if they are active. I would like to know what the query would be to see those that the tool has alerted on in the last month:

| rest splunk_server=local count=0 /services/saved/searches | search action.notable.param.severity=* | where match('action.correlationsearch.enabled', "1|[Tt]|[Tt][Rr][Uu][Ee]") | where disabled=0 | rename eai:acl.app as app, title as csearch_name, action.correlationsearch.label as csearch_label, action.notable.param.security_domain as security_domain | table csearch_name, csearch_label, app, security_domain, description action.notable.param.severity
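The rest search only lists the correlation search definitions; triggered notables are events in the notable index. Assuming a standard Enterprise Security setup, a sketch like the one below shows which searches actually fired in the last month:

```spl
index=notable earliest=-30d
| stats count AS notables_fired latest(_time) AS last_fired BY search_name
| convert ctime(last_fired)
| sort - notables_fired
```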
Hello, need some help with the below. We have multiple entries for a single IP with different values in the Status field. I want to know how many hosts Passed, how many Failed, and how many were Not Attempted.

IP           Status
10.50.50.50  Passed
10.50.50.50  Failed
10.50.50.50  Not Attempted
10.60.60.60  Passed
10.60.60.60  Failed
10.70.70.70  Passed

If I simply do stats count by Status, I get the below:
Passed: 3
Failed: 2
Not Attempted: 1

But I know there are only 3 IPs, so I need a way to know if a host Passed at least once, and mark it as Passed only:
- If an IP has a Status of Passed, mark it as 'Passed'.
- If an IP has a Status of Failed and Failed only, then count it as Failed.
- If an IP has a Status of Not Attempted and Not Attempted only, then mark it as Not Attempted.

So the output should be the below (because once an IP has a 'Passed', it shouldn't count towards the other values):
Passed: 3
Failed: 0
Not Attempted: 0

Hope the above makes sense and appreciate the help!
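One sketch of the precedence logic: count each status per IP, collapse to a single final status with case() (Passed wins, then Failed, then Not Attempted), and count IPs by that final status:

```spl
| stats count(eval(Status="Passed")) AS passed
        count(eval(Status="Failed")) AS failed BY IP
| eval final_status=case(passed > 0, "Passed",
                         failed > 0, "Failed",
                         true(), "Not Attempted")
| stats count AS hosts BY final_status
```

Against the sample data above, all three IPs have at least one Passed, so this yields Passed: 3.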
I have a very simple dynamic dropdown that lists computers by their FQDN. I have one panel that can use that token value "Failures" and I have one panel that needs the domain name stripped away "Errors." Is there an easy way to do this?     <fieldset submitButton="false" autoRun="true"> <input type="dropdown" token="TheName"> <label>Computers:</label> <fieldForLabel>host</fieldForLabel> <fieldForValue>host</fieldForValue> <search> <query>index="bfront" source="/var/log/audit/audit.log" | dedup host | table host | sort by host</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> </input> <input type="time" token="TheTime" depends="$TheName$"> <label>Time:</label> <default> <earliest>-7d@h</earliest> <latest>now</latest> </default> </input> </fieldset> <row> <panel depends="$TheName$"> <title>Failures</title> <chart> <search> <query>index="bfront" sourcetype="linux_audit" host="$TheName$" type=USER_LOGIN res=failed | top limit=10 acct</query> <earliest>$TheTime.earliest$</earliest> <latest>$TheTime.latest$</latest> <refresh>5m</refresh> <refreshType>delay</refreshType> </search> <option name="charting.axisTitleX.visibility">collapsed</option> <option name="charting.axisTitleY.visibility">collapsed</option> <option name="charting.axisY.abbreviation">none</option> <option name="charting.axisY.scale">linear</option> <option name="charting.chart">bar</option> <option name="charting.chart.showDataLabels">all</option> <option name="charting.drilldown">all</option> <option name="charting.layout.splitSeries">0</option> <option name="charting.legend.placement">none</option> <option name="charting.seriesColors">["0xf8be34","0xf8be34","0xf8be34","0xf8be34","0xf8be34"]</option> <option name="height">260</option> <option name="refresh.display">none</option> </chart> </panel> <panel depends="$TheName$"> <title>Errors</title> <chart> <search> <query> index="bront" source="/var/log/messages" host="$TheName$" eventtype=err0r | top limit=20 process</query> 
<earliest>$TheTime.earliest$</earliest> <latest>$TheTime.latest$</latest> <refresh>5m</refresh> <refreshType>delay</refreshType> </search> <option name="charting.chart">pie</option> <option name="charting.drilldown">all</option> <option name="height">270</option> <option name="refresh.display">none</option> </chart> </panel> </row>  
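One way (a sketch using a Simple XML eval token; the new token name TheShortName is made up): derive a second token in the dropdown's change handler that keeps only the part of the FQDN before the first dot, and use that token in the Errors panel:

```spl
<input type="dropdown" token="TheName">
  ...
  <change>
    <!-- split the FQDN on dots and keep the first segment -->
    <eval token="TheShortName">mvindex(split($value$, "."), 0)</eval>
  </change>
</input>
```

The Errors panel can then search host="$TheShortName$" while the Failures panel keeps host="$TheName$".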
I have a search: index=storage_summary sourcetype="isilon:quota"| eval Usage_GB=round('usage.logical'/1024/1024/1024,0) | delta Usage_GB as delta | eval change = Usage_GB - delta | timechart span=1w... See more...
I have a search: index=storage_summary sourcetype="isilon:quota"| eval Usage_GB=round('usage.logical'/1024/1024/1024,0) | delta Usage_GB as delta | eval change = Usage_GB - delta | timechart span=1week values(Usage_GB) values(change) by path where count in top400 At this point I get an output like this: And the change value is this: I need to do some diff on each of these columns (there are a lot) to see the change from one weekly value to the next.  For instance the path 3DDental changed from 227GB to 233GB.  The change value isn't right for each consecutive week for each path.  For instance, the 3DDental path, the change values should be 1,1,1,1,3 for each week interval.  3DMD didn't change so those values should be 0 for each week interval. Is delta not the right command to use?
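delta is likely the wrong tool here: it has no by clause, so it compares each row with the previous row regardless of path, mixing paths together. streamstats with by path computes the previous value per path; a sketch:

```spl
index=storage_summary sourcetype="isilon:quota"
| eval Usage_GB=round('usage.logical'/1024/1024/1024, 0)
| bin _time span=1w
| stats max(Usage_GB) AS Usage_GB BY _time path
| sort 0 path _time
| streamstats current=f window=1 last(Usage_GB) AS prev_GB BY path
| eval change=Usage_GB - prev_GB
```

From there, xyseries or timechart can pivot back to one column per path if needed.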
I have a search result like the one below, with repeating values in the 'src_ip' field, and I'm looking to count occurrences of each field value:

10.1.8.5       3
10.3.20.63     1
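If the goal is just a per-value tally, stats does it directly (the base search is a placeholder):

```spl
index=my_index my_search_terms
| stats count BY src_ip
| sort - count
```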