One of our Splunk searches that simply retrieves all events in an index for the last 24 hours used to be blazingly fast, but now it's taking up to 10 minutes to return data. What can be done to troubleshoot? We also received a message yesterday noting that field extractions were taking unusually long. Any ideas? Thanks!
One thing you could do is install the Splunk on Splunk (SoS) app, which was created by Splunk's Support team to help them troubleshoot user issues:
You could also review the search in the Search Job Inspector:
In particular, it shows you which parts of the search are consuming the most resources:
However, if the search itself hasn't changed, the bottleneck is probably elsewhere in the system; I'm betting SoS can help you find it.
Looks like I found the issue. One log was dumping in hundreds of exception messages, each exceeding 200 lines. Is there a way to tell Splunk to only keep the first so many lines of a message when pulling it in, i.e., look at an event and only index the first 50 lines or so?
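One way to do this (assuming you can edit `props.conf` on the indexer or heavy forwarder that parses this data) is the `MAX_EVENTS` setting, which caps the number of lines merged into a single multiline event. The sourcetype name below is hypothetical; substitute your own. A minimal sketch:

```ini
# props.conf -- stanza name "app_exceptions" is a placeholder for your sourcetype
[app_exceptions]
# Cap multiline events at 50 lines; lines beyond the cap are not merged
# into the event (default is 256)
MAX_EVENTS = 50
# Optionally cap the byte length of each line as well (default 10000)
TRUNCATE = 10000
```

Note that parsing-time settings like these only affect data indexed after the change takes effect (typically after a restart or config reload); events already on disk are unchanged.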
Thanks for the feedback. We do have the SoS app installed, but I couldn't spot any glaring errors in the logs. Looking at the Job Inspector, I see that much of the time is spent in command.search, command.search.kv, dispatch.fetch, and dispatch.stream.remote.
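Since command.search.kv is the time spent on search-time field extraction, those huge multiline exception events are likely what's making extraction slow. One thing you could try (a sketch, assuming the data comes in under a sourcetype you control; "app_exceptions" below is a placeholder) is turning off automatic key/value extraction for that sourcetype in `props.conf` on the search head:

```ini
# props.conf on the search head -- "app_exceptions" is a hypothetical sourcetype
[app_exceptions]
# Disable automatic search-time key/value extraction for this sourcetype;
# explicitly defined extractions still apply
KV_MODE = none
```

Alternatively, running the search in Fast Mode skips most search-time field discovery without any config change, which is a quick way to confirm that extraction is the bottleneck.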