
Determine values for max concurrent historical searches

sboogaar
Path Finder

I'm trying to determine values for the maximum number of concurrent historical searches, because we get the error:

The maximum number of concurrent historical searches on this instance has been reached.

I know that in limits.conf we can set the values that control this limit.

From the Splunk docs (https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf):

"
max_searches_per_cpu = <int>

The maximum number of concurrent historical searches for each CPU.
The system-wide limit of historical searches is computed as:
max_hist_searches = max_searches_per_cpu x number_of_cpus + base_max_searches
"
When I check resource management, I don't see a single machine at even 1% load; all are around 0.5%.

We have 3 search heads, 2 indexers, and 1 management server.
How should I determine the values for "max_searches_per_cpu" and "base_max_searches"? It seems unnecessary that we get these errors when our load is so low.


chrisyounger
SplunkTrust

There is a really good write-up here: https://docs.splunk.com/Documentation/Splunk/7.2.3/DistSearch/SHCarchitecture#How_the_cluster_handle...

It discusses the pros and cons of enforcing the quota cluster-wide versus member-by-member.

Remember that the majority of the time spent searching actually happens on the indexers. However, the limits are controlled by the specs of the search head boxes. Keep this in mind when adjusting these settings.
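As a rough illustration (using the assumed per-member limit of 22 from the formula quoted above): with cluster-wide enforcement on a 3-member cluster, the captain allows roughly 3 x 22 = 66 concurrent historical searches in total, while member-by-member enforcement caps each search head at 22 on its own. If you do decide to raise the limits rather than reduce concurrent search load, a minimal limits.conf sketch for the search heads might look like this (the numbers are purely illustrative, not recommendations):

[search]
base_max_searches = 10
max_searches_per_cpu = 2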

Cheers
