Splunk Search

Are the search concurrency settings in limits.conf too high?


I'm wrestling with performance on our production Splunk installation and have been reading up on search concurrency and limits.conf. I'm trying to reconcile what I've read with what I'm seeing, to make sure I understand it correctly.

In my current limits.conf (which I suspect is set too high):

dispatch_dir_warning_size = 10000
max_rawsize_perchunk = 0

We have 15 search heads (SHs) with 16 CPUs each. What I'm trying to wrap my head around: in the DMC, in the Search Head section, I drill down to "Search Concurrency" and see one or two servers with a high number, while most show a low number or 0. My thought is that with the limits set so high, Splunk is overtaxing one or two SHs with search processes and never really leveraging the power of the 15-SH cluster.

We experience system crashes, or situations where a search head becomes unresponsive at the API level.

Would it be best to set limits.conf to something like:

dispatch_dir_warning_size = 10000
max_rawsize_perchunk = 0
max_searches_perc = 50
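
For reference, my understanding is that these settings live in different stanzas of limits.conf: dispatch_dir_warning_size and max_rawsize_perchunk under [search], and max_searches_perc under [scheduler]. A sketch of how the proposal might look with explicit stanza headers (worth double-checking against the limits.conf spec for your Splunk version):

```
[search]
# Warn when a dispatch directory holds this many search artifacts
dispatch_dir_warning_size = 10000
# 0 = no per-chunk raw size limit
max_rawsize_perchunk = 0

[scheduler]
# Scheduled searches may use at most this percentage of total search concurrency
max_searches_perc = 50
```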

On a 16-core server, that might give us:

Max Total Searches of 28
Max Scheduled Searches of 14
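
As a sanity check on those numbers, here is a small sketch of Splunk's documented concurrency formula: max concurrent historical searches = (max_searches_per_cpu × number of CPUs) + base_max_searches, with the scheduler capped at max_searches_perc percent of that. The defaults below (base_max_searches = 6, max_searches_per_cpu = 1) are the documented ones; if your instance overrides them, the totals will differ, which may explain the 28/14 figures above.

```python
def concurrency_limits(num_cpus,
                       base_max_searches=6,      # documented default
                       max_searches_per_cpu=1,   # documented default
                       max_searches_perc=50):
    """Return (max historical searches, max scheduled searches)
    per Splunk's documented limits.conf formula."""
    max_hist = max_searches_per_cpu * num_cpus + base_max_searches
    max_sched = max_hist * max_searches_perc // 100
    return max_hist, max_sched

# 16-core search head with defaults → (22, 11)
print(concurrency_limits(16))
```

With the stock defaults a 16-core search head works out to 22 total and 11 scheduled, so if the DMC is reporting higher caps, it would be worth checking whether base_max_searches or max_searches_per_cpu has been raised somewhere in your configuration layers (btool can show the effective values).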

I'm still learning and reading, so I'd appreciate some input to validate my understanding.
Thank you,
Stephen Kirven
