Splunk Search

Search queries causing high memory utilization on indexers

harshsri21
New Member

Hi,

I am trying to find the search queries that consumed high memory on the indexers during a specific time frame.
We have an indexer cluster of 40 indexers and a search head cluster of 4 SHs. For a short span of time we suddenly experienced high memory utilization on 33 of the indexers, and as a result 2 of the SHs also went down.

Please help me build a query to identify those searches and understand the cause of this behavior.


DalJeanis
Legend

Try something like this...

index=_audit action="search" info="completed" NOT user="splunk-system-user"
| table user, is_realtime, total_run_time, exec_time, result_count
| eval exec_time=strftime(exec_time,"%m/%d/%Y %H:%M:%S:%3Q")
| sort 0 - total_run_time

If something is chewing up a lot of resources, it's going to have a high total_run_time, so that query should float it up to the top. You can limit it to the time in question, plus a little before and after, and it should give you a few candidates to check for a resource hog.
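For example, a time-bounded variant that also pulls in the search text so you can see which query each candidate was might look like the sketch below. This assumes the standard _audit fields (search, search_id, scan_count) are present in your audit events; swap the earliest/latest values for your actual incident window.

index=_audit action="search" info="completed" NOT user="splunk-system-user" earliest=-24h@h latest=now
| table _time, user, search_id, search, is_realtime, total_run_time, result_count, scan_count
| sort 0 - total_run_time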

You can also add is_realtime=1 to the initial search to look only at real-time searches. They tend to be massive CPU hogs, so check them out as well.
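For instance, only the first line of the search above changes, with everything else staying the same:

index=_audit action="search" info="completed" is_realtime=1 NOT user="splunk-system-user"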


harshsri21
New Member

Thanks! Can we also get a Splunk query to see which processes are consuming high memory on the indexers?
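One way to get at per-process memory is the resource usage data that splunkd writes to the _introspection index (the same data the Monitoring Console uses). The following is a rough sketch, assuming the standard splunk_resource_usage sourcetype and its data.mem_used field (reported in MB); add a host filter to narrow it to your indexer hosts.

index=_introspection sourcetype=splunk_resource_usage component=PerProcess
| eval mem_used_mb='data.mem_used'
| stats max(mem_used_mb) AS peak_mem_mb BY host, data.process, data.args
| sort 0 - peak_mem_mb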
