
What configuration changes can we make to limit the ability of a user search to consume all the available memory on an indexer?

sat94541
Communicator

All of the indexers in our indexer cluster are becoming unstable due to a user search. We would like a resource to review our indexer configuration and recommend configuration changes that limit the ability of a user search to consume all of the available memory on an indexer. I am uploading a diag from one of the affected indexers.

Does Splunk have the capability to set some sort of out-of-memory killer, where a search gets terminated if it uses memory beyond some threshold?


rbal_splunk
Splunk Employee

I wanted to share information on the steps you can take to kill searches that consume memory beyond a threshold.

Refer to: http://docs.splunk.com/Documentation/Splunk/6.2.3/Admin/limitsconf

The three attributes that you need to review are:

enable_memory_tracker =

* If the memory tracker is disabled, the search will not be terminated even if it exceeds the memory limit.
* Must be set to true if you want to enable search_process_memory_usage_threshold or search_process_memory_usage_percentage_threshold.
* Defaults to false.

search_process_memory_usage_threshold =

* To be active, this setting requires enable_memory_tracker = true.
* Signifies the maximum memory, in MB, that the search process can consume in RAM.
* Search processes that violate the threshold are terminated.
* If the value is set to zero, splunk search processes are allowed to grow unbounded in terms of in-memory usage.
* The default value is 4000 MB (4 GB).

search_process_memory_usage_percentage_threshold =

* To be active, this setting requires enable_memory_tracker = true.
* Signifies the percentage of the total memory that the search process is entitled to consume.
* Any time the search process violates the threshold percentage, the process is brought down.
* If the value is set to zero, splunk search processes are allowed to grow unbounded in terms of percentage memory usage.
* The default value is 25%.
* Any number larger than 100 or less than 0 is discarded and the default value is used.
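
As a minimal sketch, the change would look something like the following in limits.conf on the instances running the searches (the threshold values below are illustrative, not recommendations, and the file location is an assumption; it could equally live in an app you deploy):

# $SPLUNK_HOME/etc/system/local/limits.conf

[search]
# Turn on the memory tracker so the thresholds below take effect.
enable_memory_tracker = true
# Kill a search process once it uses more than ~8 GB of RAM (illustrative value).
search_process_memory_usage_threshold = 8000
# Or kill it once it uses more than 20% of the host's total memory (illustrative value).
search_process_memory_usage_percentage_threshold = 20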

Amritanshu1162
Engager

Where are we pushing the change: to the search head cluster or the indexer cluster? And how do we check that the setting is actually being used? In the Monitoring Console there is a separate memory usage health check for the indexers and the search heads (correct me if I am wrong). If the spike in memory usage is on the search heads, do we need to push the limits.conf with the memory tracker enabled to the search head cluster, and vice versa for the indexers?


isoutamo
SplunkTrust

One thing that could help you with bad searches is Splunk Workload Management: https://www.splunk.com/en_us/blog/tips-and-tricks/best-practices-for-using-splunk-workload-managemen...

You could define different classes for users, admins, etc.
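
A rough sketch of the idea in workload_pools.conf is below. The stanza and attribute names are from memory and should be verified against the workload_pools.conf spec and the Workload Management docs for your version; the weights are purely illustrative, and workload management itself requires a supported Linux/cgroups setup.

# workload_pools.conf (illustrative sketch)

[workload_category:search]
cpu_weight = 70
mem_weight = 70

# Default pool for ordinary user searches (low weights).
[workload_pool:standard_perf]
cpu_weight = 20
mem_weight = 20
category = search
default_category_pool = true

# Higher-weight pool that rules can assign to admin searches.
[workload_pool:admin_perf]
cpu_weight = 50
mem_weight = 50
category = search

Searches are then routed into these pools with rules in workload_rules.conf (for example by role), so a runaway user search in the low-weight pool cannot starve admin or scheduled searches.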
r. Ismo
