Splunk Search

What should I do, if anything, when approaching the max number of searches?

Splunk Employee

Sometimes I see this message in Splunk Web:

You are approaching the maximum number of searches that can be run concurrently. current=15, maximum=18

What should I do about it?

1 Solution

Splunk Employee

If your hardware can support allocating more CPU resources to search execution, you can increase the maximum number of concurrent searches.

The concurrent search limit is based on CPU count and governed by two settings in limits.conf. From limits.conf.spec:

> [search]
> max_searches_per_cpu = <int>
> * the maximum number of concurrent searches per CPU. The system-wide number of searches
> * is computed as max_searches_per_cpu x number_of_cpus + 2
> * Defaults to 2

> [scheduler]
> max_searches_perc = <integer>
> * the maximum number of searches the scheduler can run, as a percentage
> * of the maximum number of concurrent searches, see [search] max_searches_per_cpu
> * for how to set the system wide maximum number of searches
> * Defaults to 25

For an 8 CPU box, the default maximum number of concurrent searches is 18 (2 searches per CPU x 8 CPUs + 2). Such a server is likely capable of supporting more searches per CPU, however, so it is generally safe to raise max_searches_per_cpu to 4, for a new limit of 4 x 8 + 2 = 34.
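
The formula from the spec can be sketched as a quick calculation (a toy illustration for working out your own limit, not part of Splunk):

```python
def max_concurrent_searches(num_cpus: int, per_cpu: int = 2) -> int:
    """System-wide search limit per limits.conf.spec:
    max_searches_per_cpu x number_of_cpus + 2."""
    return per_cpu * num_cpus + 2

# Default on an 8-CPU box:
print(max_concurrent_searches(8))             # 18
# After raising max_searches_per_cpu to 4:
print(max_concurrent_searches(8, per_cpu=4))  # 34
```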

Additionally, if you run many scheduled searches for alerts or dashboards, you can divide capacity more equitably by raising max_searches_perc above its default of 25% (which, at the default limit of 18, allows the scheduler only 4 concurrent searches).
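
Putting both settings together, an override in $SPLUNK_HOME/etc/system/local/limits.conf might look like the following (values are illustrative for an 8-CPU box; tune to your own hardware and scheduled-search load):

> [search]
> max_searches_per_cpu = 4

> [scheduler]
> max_searches_perc = 50

A restart of splunkd is required for the change to take effect.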

These settings will allow you to maximize your hardware. If you find this is not adequate, consider adding a server. In general, the recommended approach to scaling is to add more CPUs via additional Splunk servers so the workload of search execution can be shared.


New Member

How do I hide this popup? It's freaking out our users.



Splunk Employee
Splunk Employee

max_searches_per_cpu=4 could be a typo; Splunk supports a maximum of 2 searches per CPU, so it should be max_searches_per_cpu=2 or less.


Super Champion

Note that the default was bumped to max_searches_per_cpu=4 in 4.1 (or possibly earlier), so the "Defaults to 2" is no longer accurate. I had bumped this to 3 years ago in my local/limits.conf, so after an upgrade I ended up with a lower value than the new default. Just a heads up!

Champion

You shouldn't need to do anything. If you do hit the max concurrent search limit, once a search completes, it frees up a spot for the next search in line. Eventually all searches will complete.

It is not necessarily bad to keep seeing this warning, so long as the maximum number of concurrent searches is not actually reached. If you consistently hit the limit, you may want to look into staggering your scheduled searches.
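
For example, rather than scheduling every saved search at the top of the hour, you can offset their cron schedules in savedsearches.conf so they do not all start at once (stanza names here are hypothetical):

> [alert_disk_usage]
> cron_schedule = 0 * * * *

> [alert_failed_logins]
> cron_schedule = 5 * * * *

> [dashboard_summary]
> cron_schedule = 10 * * * *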