
Are the search concurrency settings in limits.conf too high?

skirven
Communicator

Hi!
I'm wrestling with performance on our production Splunk installation and have been reading up on search concurrency and limits.conf. I'm trying to reconcile what I've read with what I'm seeing, to make sure I understand what I'm looking at.

Here is my current limits.conf (which I contend is set too high):
[search]

base_max_searches=100
max_searches_per_cpu=10
dispatch_dir_warning_size = 10000
max_rawsize_perchunk = 0

We have 15 SHs with 16 CPUs each. What I'm trying to wrap my head around is the DMC: in the search head section, the "Search Concurrency" drop-down shows one or two servers with a high number, while most show low values or 0. My thought is that with the limits set so high, Splunk is piling searches onto one or two SHs, bogging them down and never really leveraging the power of the 15-SH cluster.
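If I'm reading the limits.conf spec correctly, the effective per-SH cap is max_searches_per_cpu x number_of_cpus + base_max_searches. Here is a quick sketch of what the current settings allow on one of our 16-CPU search heads (my own arithmetic, not Splunk output, and assuming Splunk detects 16 CPUs per SH):

# Per-SH concurrency cap implied by the CURRENT settings (rough sketch, my own math)
base_max_searches = 100
max_searches_per_cpu = 10
number_of_cpus = 16   # assuming Splunk sees 16 CPUs per search head

# Documented formula for the concurrent historical search limit
max_hist_searches = max_searches_per_cpu * number_of_cpus + base_max_searches
print(max_hist_searches)   # 260 concurrent searches allowed on a 16-CPU box

260 concurrent searches on 16 CPUs is far more than the hardware can realistically run at once, which is a big part of why I think these limits are hurting us.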

We experience crashes, or cases where a search head stops responding at the API level, and so on.

Would it be better to set limits.conf to something like:

base_max_searches=6
max_searches_per_cpu=1
dispatch_dir_warning_size = 10000
max_rawsize_perchunk = 0
max_searches_perc = 50

On a 16-core server, that should give us (per search head):

Max total searches: 22
Max scheduled searches: 11
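A minimal sketch of how I arrived at those numbers, using the same formula from the limits.conf spec (max_searches_per_cpu x number_of_cpus + base_max_searches for the total, and max_searches_perc applied to that total for scheduled searches). Again this is just my own arithmetic, not Splunk output, assuming 16 CPUs per SH:

# Back-of-the-envelope for the PROPOSED settings (my arithmetic, not Splunk output)
base_max_searches = 6
max_searches_per_cpu = 1
max_searches_perc = 50
number_of_cpus = 16   # assuming Splunk sees 16 CPUs per search head

# Documented formula for the concurrent historical search limit
max_hist_searches = max_searches_per_cpu * number_of_cpus + base_max_searches   # 22

# Scheduled searches are capped at max_searches_perc percent of that limit
max_scheduled_searches = max_hist_searches * max_searches_perc // 100           # 11

print(max_hist_searches, max_scheduled_searches)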

I'm still learning and reading, so I'd appreciate some input to validate my thinking.
Thank you,
Stephen Kirven
