Are the search concurrency settings in limits.conf too high?

skirven
Communicator

Hi!
I'm wrestling with performance on our production Splunk installation and have been reading up on search concurrency and limits.conf. I'm trying to reconcile what I've read with what I'm seeing, to make sure I understand it correctly.

In my current limits.conf (which I contend is set too high):
[search]
base_max_searches = 100
max_searches_per_cpu = 10
dispatch_dir_warning_size = 10000
max_rawsize_perchunk = 0

We have 15 SHs with 16 CPUs each. What I'm trying to wrap my head around is what the DMC shows: in the search head section, under the "Search Concurrency" drop-down, one or two servers show a high number while most show a low number or 0. My thought is that with the limits set this high, Splunk is piling search processes onto one or two SHs, bogging them down and never really leveraging the power of the 15-SH cluster.
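
To check whether those one or two heads are actually hitting their concurrency ceiling, I've been considering a search along these lines (just a sketch; it assumes the standard scheduler.log fields in _internal, so the exact status/reason values may need adjusting):

index=_internal sourcetype=scheduler status=skipped
| stats count by host, reason
| sort - count

If the skips pile up on the same one or two hosts with a "max concurrent searches reached" style reason, that would line up with what the DMC is showing me.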

We also experience crashes, or situations where a search head stops responding at the API level.

Would it be best to set limits.conf to something like this:

[search]
base_max_searches = 6
max_searches_per_cpu = 1
dispatch_dir_warning_size = 10000
max_rawsize_perchunk = 0
max_searches_perc = 50

On a 16-core server, that should give us (see the quick sanity-check search below):

Max total searches: (1 x 16) + 6 = 22
Max scheduled searches: 50% of 22 = 11
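
For anyone checking my math, my understanding of the formula (please correct me if I have it wrong) is: maximum concurrent historical searches = (max_searches_per_cpu x number of CPUs) + base_max_searches, with the scheduler allowed max_searches_perc of that total. A run-anywhere sketch of that arithmetic (the eval field names just mirror the limits.conf setting names for readability; the search doesn't read the actual config):

| makeresults
| eval cpus=16, base_max_searches=6, max_searches_per_cpu=1, max_searches_perc=50
| eval max_hist_searches = (max_searches_per_cpu * cpus) + base_max_searches
| eval max_scheduled_searches = floor(max_hist_searches * max_searches_perc / 100)
| table cpus, max_hist_searches, max_scheduled_searches

If that formula is right, plugging in our current values (10 per CPU and a base of 100) gives 260 total / 130 scheduled per search head, which is part of why I think the current settings are too high.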

I'm still learning and reading, so I'd like some input to validate the findings.
Thank you,
Stephen Kirven
