Knowledge Management

High RAM usage on Splunk Indexer

rdelmark
Explorer

At times I have seen users run searches like index=* and let them run. One such user only has access to 3 of our 35 indexes, yet their search consumed up to 7 GB of RAM on the Splunk indexer.

How can we control this? We have 100 Splunk users, and in the past some of them have pushed the indexer's RAM usage up to 99% and frozen it.


rdelmark
Explorer

Thank you so much for the quick reply and the excellent suggestions. I will educate my users. Would you be able to supply me with a sample search that I can use to identify long-running searches, please?


_d_
Splunk Employee
Splunk Employee

Although index=* is a pretty bad search, it is also a legitimate one, so you can't really block it with search filters. There are several things you can do here, but none of them will pay off better than educating your users. Other options include (1) changing the default Time Range Picker from All Time to Last 60 minutes or Last 24 hours, and/or (2) getting alerted on long-running searches and killing the offending process.
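
For option (1), one way to change the default time range is through ui-prefs.conf. This is a sketch, not a drop-in config; check the ui-prefs.conf spec for your Splunk version, and note that the app path (here the search app) depends on where your users run searches:

```
# $SPLUNK_HOME/etc/apps/search/local/ui-prefs.conf
# Default the Time Range Picker to the last 24 hours instead of All Time
[search]
dispatch.earliest_time = -24h@h
dispatch.latest_time   = now
```

A restart or debug/refresh is typically needed before the new default takes effect.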

_d_
Splunk Employee
Splunk Employee

There are several ways to get that information. First, I would use the _audit index to identify users with historically long-running searches and inform them appropriately. Next, to find currently long-running searches, I would use the following search:

| rest /services/search/jobs | search dispatchState=RUNNING | table dispatchState runDuration title

Then check the runDuration field for excessive values, whatever "excessive" means in your context.
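
For the historical side, a sketch of an _audit search could look like the following. The 300-second threshold is an arbitrary example, so adjust it to what counts as "long-running" in your environment:

```
index=_audit action=search info=completed total_run_time>300
| stats count AS long_searches
        avg(total_run_time) AS avg_runtime_sec
        max(total_run_time) AS max_runtime_sec
    BY user
| sort - max_runtime_sec
```

This groups completed searches that ran longer than the threshold by user, which gives you a short list of people to talk to first.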

