At times I have seen users run searches like index=* and let them run. One such user only has restricted access to 3 of our 35 indexes, yet the search took up to 7 GB of RAM on the Splunk indexer.
How can we control this? We have 100 Splunk users, and in the past some of them have pushed the indexer's RAM usage up to 99% and frozen it.
Thank you so much for the quick reply and the excellent suggestions. I will educate my users. Would you be able to supply me with a sample search that I can use to identify long-running searches, please?
Although index=* is a pretty bad search, it is also a legitimate one, so you can't simply filter it out. There are several things you can do here, but none of them will pay off better than educating your users. Other options include: (1) changing the default Time Range Picker from All Time to Last 60 minutes or Last 24 hours, and/or (2) getting alerted on long-running searches and killing the offending process.
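For option (1), one way to change the default time range is via ui-prefs.conf on the search head. A minimal sketch, assuming a local override in $SPLUNK_HOME/etc/system/local/ui-prefs.conf (adjust the app scope and time window to your environment):

```
# ui-prefs.conf - set the default Time Range Picker for the search page
# to Last 24 hours instead of All Time
[search]
dispatch.earliest_time = -24h
dispatch.latest_time = now
```

A restart of Splunk Web (or a debug/refresh) is typically needed before the new default takes effect, and users can still pick All Time manually; this only changes the starting point.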
There are several ways to get that information. First, I would use the _audit index to identify users with historically long-running searches and inform them accordingly. Then, to find currently long-running searches, I would use the following search:

| rest /services/search/jobs | search dispatchState=RUNNING | table dispatchState runDuration title

and check the runDuration field for excessive values - whatever that means in your context.
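For the historical side, a starting-point search against the _audit index might look like the sketch below. The 300-second threshold is an arbitrary example value, not a recommendation; tune it to your environment:

```
index=_audit action=search info=completed
| stats max(total_run_time) AS max_runtime_sec count AS search_count BY user
| where max_runtime_sec > 300
| sort - max_runtime_sec
```

This groups completed searches by user and surfaces those whose longest search exceeded the threshold, which gives you a shortlist of users to talk to.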