All Posts


Build Query to Show history of alert management to include Analyst Name, Status, Time in Analysts' queue - Hello, we are trying to pinpoint, with a report or a simple query, how long each analyst retains an alert in their queue. It will help us manage alerts more efficiently and determine bottlenecks in our process. It should be displayed in a table if possible. Thank you in advance.
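If it helps, here is a sketch of one possible shape for such a query. Note that the index, sourcetype, and field names (alert_id, analyst, status) are assumptions - substitute whatever your alert-management events actually record. The idea is to sort each alert's status changes in reverse time order, use streamstats to pick up the timestamp of the following change, and sum the differences per analyst and status:

```
index=alert_audit sourcetype=alert_updates
| sort 0 alert_id, -_time
| streamstats current=f window=1 last(_time) as next_change by alert_id
| eval seconds_in_state = next_change - _time
| stats sum(seconds_in_state) as total_seconds by analyst, status
| eval time_in_queue = tostring(total_seconds, "duration")
| table analyst, status, time_in_queue
```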
Created an answer with workaround for the xpath and prolog header line issue here:  https://community.splunk.com/t5/Splunk-Search/The-xpath-command-does-not-work-with-XML-prolog-header-lines-e-g/td-p/711425
splunkd.log has errors about BTree. I get about 10 messages a second logged in splunkd.log:

ERROR BTree [1001653 IndexerTPoolWorker-3] - 0th child has invalid offset: indexsize=67942584 recordsize=166182200, (Internal)
ERROR BTreeCP [1001653 IndexerTPoolWorker-3] - addUpdate CheckValidException caught: BTree::Exception: Validation failed in checkpoint

I have noticed that btree_index.dat and btree_records.dat are re-created every few seconds. They appear to be copied into the corrupt directory. I have tried shutting down Splunk and copying snapshot files over, but when I restart Splunk they are overwritten and the whole loop of files being created and then copied to corrupt starts again.

I ran btprobe on the splunk_private_db fishbucket and the output was:

no root in /opt/splunk/data/fishbucket/splunk_private_db/btree_index.dat with non-empty recordFile /opt/splunk/data/fishbucket/splunk_private_db/btree_records.dat
recovered key: 0xd3e9c1eb89bdbf3e | sptr=1207
Exception thrown: BTree::Exception: called debug on btree that isn't open!

It is entirely possible there is corruption somewhere. We did have a filesystem issue a while back; I had to run fsck and there were a few files that I removed. As far as the data goes, I can't seem to find where the problem might be. In Splunk search I appear to have incomplete data in the _internal index, and the Licensing and Data Quality views are empty and have no data.

Any ideas on where to look next?

Currently the LM, indexer, SH, and DS are all on the same host. I'm currently using Splunk Enterprise Version: 9.4.0 Build: 6b4ebe426ca6
To work around this issue, remove the valid XML prolog headers from the event before calling the xpath command, or use the spath command instead. Here is a run-anywhere example:

| makeresults
| eval _raw="<?xml version=\"1.0\"?> <Event> <System> <Provider Name='ABC'/> </System> </Event> <!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\"> <Event> <System> <Provider Name='EFG'/> </System> </Event> <?xml version=\"1.0\"?> <!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\"> <Event> <System> <Provider Name='HIJ'/> </System> </Event>"
| eval xml=replace(_raw, "<(\?xml|!DOCTYPE).+?>[\r\n]*", "")
| xpath field=_raw outfield=raw_provider_name_attr "//Provider/@Name"
| xpath field=xml outfield=xml_provider_name_attr "//Provider/@Name"
| spath output=spath_provider_name_attr Event.System{2}.Provider{@Name}
| table _raw raw_provider_name_attr xml* spath*
The xpath command does not work if the XML event contains valid prolog header lines (https://www.w3schools.com/xml/xml_syntax.asp). For example, this works:

| makeresults
| eval _raw="<Event> <System> <Provider Name='ABC'/> </System> </Event>"
| xpath field=_raw outfield=raw_provider_name_attr "//Provider/@Name"
| table _raw raw_provider_name_attr

but add a prolog header and it will no longer work:

| makeresults
| eval _raw="<?xml version=\"1.0\"?> <Event> <System> <Provider Name='ABC'/> </System> </Event>"
| xpath field=_raw outfield=raw_provider_name_attr "//Provider/@Name"
| table _raw raw_provider_name_attr

I've raised a support case with Splunk about this.
I want to get a list of dashboards that have not been used by anyone for more than 90 days. I have tried the query below but it didn't work well.

| rest splunk_server=local "/servicesNS/-/-/data/ui/views"
| rename title as dashboard
| fields dashboard
| eval accessed=0
| search NOT
    [ search index=_internal sourcetype=splunkd_ui_access earliest=-90d@d
      | rex field=uri "/app/[^/]+/(?<dashboard>[^?/\s]+)"
      | search NOT dashboard IN ("search", "home", "alert", "lookup_edit", "@go", "data_lab", "dataset", "datasets", "alerts", "dashboards", "reports")
      | stats count as accessed by dashboard
      | fields dashboard, accessed ]
| stats sum(accessed) as total_accessed by dashboard
| where total_accessed=0
| table dashboard
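One thing that stands out in that query: the NOT subsearch returns both the dashboard and accessed fields, which makes the exclusion match stricter than intended. A sketch of an alternative shape (keeping the same endpoints and rex; adjust both to your environment) is to append the usage counts to the dashboard list and sum them, so dashboards with zero accesses fall out naturally:

```
| rest splunk_server=local "/servicesNS/-/-/data/ui/views"
| rename title as dashboard
| fields dashboard
| eval accessed=0
| append
    [ search index=_internal sourcetype=splunkd_ui_access earliest=-90d@d
      | rex field=uri "/app/[^/]+/(?<dashboard>[^?/\s]+)"
      | stats count as accessed by dashboard ]
| stats sum(accessed) as total_accessed by dashboard
| where total_accessed=0
| table dashboard
```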
You could set up daily summaries to a summary index and then run your queries over those. You might also find better performance using stats count by dest, src rather than dedup.
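A minimal sketch of that summary-index approach, reusing the firewall search from the question. The summary index name dest_src_summary is an assumption - create it first and schedule the collecting search to run daily:

```
index=firewall sourcetype=cp_log:syslog source=checkpoint:firewall dest="172.24.245.210" earliest=-1d@d latest=@d
| stats count by dest, src
| collect index=dest_src_summary
```

The 30-day report then runs over the much smaller summarized data:

```
index=dest_src_summary earliest=-30d@d
| stats sum(count) as count by dest, src
```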
The following run-anywhere search demonstrates how to use local-name() notation with the xpath command to extract field values (note: the xmlkv command works well, but not on node attribute values, e.g. the Name=<value> attribute used in the example below).

| makeresults
| eval _raw="<Event> <System> <Provider Name='A'/> </System> </Event> <Event xmlns='nameSpace'> <System xmlns='anotherNameSpace'> <Provider Name='B'/> </System> </Event> <Event xmlns='nameSpace'> <System a='attribute'> <Provider Name='C'/> </System> </Event> <e:Event xmlns:e='prefixed/nameSpace'> <s:System xmlns:s='moreNameSpace'> <Provider Name='D'>X</Provider> <Provider Name='E'>Z</Provider> </s:System> </e:Event>"
``` examples of using xpath with XML that contains namespace declarations ```
| xpath outfield=name_no_ns "//Provider/@Name"
| xpath outfield=name_with_ns1 "//*[local-name()='Provider']/@Name"
| xpath outfield=name_with_ns2 "./*/*[local-name()='System'][@a='attribute']/*[local-name()='Provider']/@Name"
| xpath outfield=name_with_ns3 "/*[name()='e:Event' and namespace-uri()='prefixed/nameSpace']/*[name()='s:System']/*[name()='Provider']/@Name"
| xpath outfield=value_with_ns1 "/*[name()='e:Event' and namespace-uri()='prefixed/nameSpace']/*[name()='s:System']/*[name()='Provider']"
| xpath outfield=value_with_ns2 "/*[name()='e:Event' and namespace-uri()='prefixed/nameSpace']/*[name()='s:System']/*[name()='Provider'][@Name='E']"
``` spath also provides another method to extract XML values ```
| spath output=spath_attribute path=Event.System{2}.Provider{@Name}
| spath output=spath_value path=e:Event.s:System.Provider

I raised this with the Splunk documentation team and hopefully they'll add an extended example like the one above to demonstrate namespace support when using xpath.

One last thing: there is currently a bug in the xpath command, and if the XML has prolog declarations (e.g. <?xml version="1.0"?> or <!DOCTYPE ...>) then xpath does not work. I've raised a support case about this.
A workaround is modifying the event and removing the prolog declarations, or using the spath command instead. Hope this helps anyone else who experiences this issue.
Hi @robertlynch2020

Is this what you are after? I've loaded in your sample event to start with, but you can replace this with the search for your events!

| makeresults
| eval _raw="{\"resourceSpans\":[{\"resource\":{\"attributes\":[{\"key\":\"telemetry.sdk.version\",\"value\":{\"stringValue\":\"1.12.0\"}},{\"key\":\"telemetry.sdk.name\",\"value\":{\"stringValue\":\"opentelemetry\"}},{\"key\":\"telemetry.sdk.language\",\"value\":{\"stringValue\":\"cpp\"}},{\"key\":\"service.instance.id\",\"value\":{\"stringValue\":\"00vptl2h\"}},{\"key\":\"service.namespace\",\"value\":{\"stringValue\":\"MXMARKETRISK.SERVICE\"}},{\"key\":\"service.name\",\"value\":{\"stringValue\":\"MXMARKETRISK.ENGINE.MX\"}}]},\"scopeSpans\":[{\"scope\":{\"name\":\"murex::tracing_backend::otel\",\"version\":\"v1\"},\"spans\":[{\"traceId\":\"cff762901d1eff01766119738a9218e2\",\"spanId\":\"71d94e8ebb30a3d5\",\"parentSpanId\":\"920e1021406277a9\",\"name\":\"fullreval_task\",\"kind\":\"SPAN_KIND_INTERNAL\",\"startTimeUnixNano\":\"1716379123221825454\",\"endTimeUnixNano\":\"1716379155367858727\",\"attributes\":[{\"key\":\"market_risk_span\",\"value\":{\"stringValue\":\"true\"}},{\"key\":\"mr_batchId\",\"value\":{\"stringValue\":\"440\"}},{\"key\":\"mr_batchType\",\"value\":{\"stringValue\":\"Full Revaluation\"}},{\"key\":\"mr_bucketName\",\"value\":{\"stringValue\":\"imccBucket#ALL_10_Reduced\"}},{\"key\":\"mr_jobDomain\",\"value\":{\"stringValue\":\"Market Risk\"}},{\"key\":\"mr_jobId\",\"value\":{\"stringValue\":\"Marketing_Bench | 31/03/2016 | 
17\"}},{\"key\":\"mr_strategy\",\"value\":{\"stringValue\":\"typo_Bond\"}},{\"key\":\"mr_uuid\",\"value\":{\"stringValue\":\"b1ed4d3a-0e4d-4afa-ad39-7cf6a07c36a9\"}},{\"key\":\"mrb_batch_affinity\",\"value\":{\"stringValue\":\"Marketing_Bench_run_Batch|Marketing_Bench|2016/03/31|17_FullReval0_00029\"}},{\"key\":\"mr_batch_compute_cpu_time\",\"value\":{\"doubleValue\":31.586568}},{\"key\":\"mr_batch_compute_time\",\"value\":{\"doubleValue\":31.777}},{\"key\":\"mr_batch_load_cpu_time\",\"value\":{\"doubleValue\":0.0}},{\"key\":\"mr_batch_load_time\",\"value\":{\"doubleValue\":0.0}},{\"key\":\"mr_batch_status\",\"value\":{\"stringValue\":\"WARNING\"}},{\"key\":\"mr_batch_total_cpu_time\",\"value\":{\"doubleValue\":31.912966}},{\"key\":\"mr_batch_total_time\",\"value\":{\"doubleValue\":32.14}}],\"status\":{}}]}]}]}"
| eval eventKey=md5(_raw)
| eval attributes=json_array_to_mv(json_extract(_raw,"resourceSpans{}.scopeSpans{}.spans{}.attributes"))
| mvexpand attributes
| eval attribute_key=json_extract(attributes,"key")
| eval attribute_val=coalesce(json_extract(json_extract(attributes,"value"),"stringValue"),json_extract(json_extract(attributes,"value"),"doubleValue"))
| eval extracted_{attribute_key}=attribute_val
| stats values(extracted_*) as * by eventKey

Basically you're doing:

| eval eventKey=md5(_raw)
| eval attributes=json_array_to_mv(json_extract(_raw,"resourceSpans{}.scopeSpans{}.spans{}.attributes"))
| mvexpand attributes
| eval attribute_key=json_extract(attributes,"key")
| eval attribute_val=coalesce(json_extract(json_extract(attributes,"value"),"stringValue"),json_extract(json_extract(attributes,"value"),"doubleValue"))
| eval extracted_{attribute_key}=attribute_val
| stats values(extracted_*) as * by eventKey

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped.
Regards
Will
Splunk's xpath documentation does not show any examples of how to use the xpath command if the XML contains namespace declarations, e.g. <event xmlns='mynamespace'> or <prefix:Event xmlns:prefix='mynamespace'>. The xpath command will not extract any results unless the event is modified and the namespace declaration(s) removed from the event first. Probably the most common workaround is to use the spath command instead. However, after some googling about XPath path syntax, you find there is a special local-name() notation that can be used so that namespace declarations are ignored during parsing.
This worked well, thanks!
Hi @Kenny_splunk

I think the best place to start here is by checking the _audit index to see who is using/searching against the index in question. Start off with the following query and take it from there:

index=_audit search="*<yourIndexName>*" info=completed action=search

It's important to remember, however, that some people might search for index=* in order to access a particular index, which might not come up in the above search. They might also use something like win* instead of win_events. People can use index="yourName", index=yourName, index IN (yourName,anotherName) etc., which is why I included the wildcards either side in the above sample query. You might want to tune it to your environment as you see fit!

In these logs you should find a number of useful fields, such as "search" (what they ran) and "user" (who ran it), amongst other things like event_count and result_count.

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped.
Regards
Will
If you want a list of the top 5 hosts reporting into each index then I would look to use the following search:

| tstats count where index=* by host, index
| sort 0 - count
| streamstats count as n by index
| search n<=5
| stats values(host) by index

This Splunk search starts by using tstats to efficiently count events for each host and index, retrieving data across all indexes. It then sorts the results in descending order by event count so that the most active hosts appear first (sort 0 removes the default 10,000-result limit). The streamstats command assigns a running count (n) to each record within its respective index, effectively numbering the hosts within each index. The search n<=5 step filters the results to include only the top 5 hosts per index based on event count. Finally, stats values(host) by index consolidates the results to display the top 5 hosts for each index in a clean format.

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped.
Regards
Will
Hi @pflaher

I wonder if you could share an example event that you are searching across, as I don't have access to an example dataset for this?

One thing you could try, which I have had success with, is using TERM, like this:

index=firewall sourcetype=cp_log:syslog source=checkpoint:firewall dest="172.24.245.210" TERM(*172.24.245.210*)

The wildcards are less than ideal but could help speed up your searches (I found TERM can give 10x faster searches). Depending on the data you might be able to do TERM(dest=172.24.245.210) - you could try either. Does this give you a faster response? It would be worth comparing the job inspector for the two searches to see if this improves your response time, fingers crossed!

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped.
Regards
Will
@gcusello I never got a chance to do it today but will try tomorrow and report back. 
well TIL… thanks @SanjayReddy 
When I run this query to give me results for the last 24 hours, it takes hours to complete. I would like to run it for, say, 30 days, but the time it takes would be unreasonable.

index=firewall sourcetype=cp_log:syslog source=checkpoint:firewall dest="172.24.245.210"
| fields dest, src
| dedup dest, src
| table dest, src

I am looking to identify any front-end application server that connects to this 172.24.245.210 server.
@kiran_panchavat we are facing a similar issue, any chance you can share the py script you received from PP?
The command you're looking for is eval.

index=kafka-np sourcetype="KCON" connName="CCNGBU_*" ERROR=ERROR OR ERROR=WARN
| eval StatusMsg = case(<<some expression>>, "Task threw an uncaught and unrecoverable exception",
    <<some other expression>>, "Ignoring await stop request for non-present connector",
    ...,
    <<a different expression>>, "Connection refused",
    1==1, "Unknown")
| table host connName StatusMsg

The trick is in selecting the appropriate status message. You'll need to key off some field(s) in the results.
There is no option in tstats or values to limit the number of values. You can, however, expand the host field and then limit the number displayed.

| tstats values(host) as host where index=* by index
| mvexpand host
| dedup 5 index host