Search Query consuming high memory utilization on indexers

I am trying to find a list of search queries, within a specific time frame, that consumed high memory on the indexers.
We have an indexer cluster of 40 indexers and a search head cluster of 4 search heads. For a short span of time we suddenly experienced high memory utilization on 33 of the indexers, and as a consequence 2 of the search heads also went down.

Please help me generate such a query and understand the cause of this behavior.



Try something like this...

index=_audit action="search" info="completed" NOT user="splunk-system-user"
| table user, search_id, is_realtime, total_run_time, exec_time, result_count
| eval exec_time=strftime(exec_time, "%m/%d/%Y %H:%M:%S.%3Q")
| sort 0 - total_run_time

If something is chewing up a lot of resources, it's going to have a high total_run_time, so that query should float it up to the top. You can limit it to the time in question, plus a little before and after, and it should give you a few candidates to check for a resource hog.

You can also add is_realtime=1 to the initial search to look only at real-time searches. They tend to be massive CPU consumers, so check them out as well.
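For example, the initial search with the real-time filter and a bounded time window might look like this (the earliest/latest values are placeholders; set them to bracket your incident window):

index=_audit action="search" info="completed" is_realtime=1 NOT user="splunk-system-user" earliest=-4h@h latest=now
| table user, search_id, total_run_time, exec_time, result_count
| sort 0 - total_run_time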



Thanks. Can we also get a Splunk query to find which processes are consuming high memory on the indexers?
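Assuming introspection data is being collected on your indexers (it is enabled by default on recent Splunk versions), the _introspection index records per-process resource usage on each instance, and a search along these lines should surface the heaviest processes by memory:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess
| stats max(data.pct_memory) AS peak_pct_memory, max(data.mem_used) AS peak_mem_used BY host, data.process
| sort 0 - peak_pct_memory

Restrict host to your indexers and set the time picker to the incident window. For search processes, the data.search_props.sid field ties a process back to a specific search ID, which you can cross-reference against the _audit results above.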
