Not a question, but a tip for all those running 5.0.1 or 5.0.2

tmeader
Contributor

There is a configuration file default-setting error (confirmed by Splunk support today when I called) in at least the versions noted above. I'm assuming it is also in 5.0, but I can't confirm that.

The value of "max_searches_per_cpu" under the "[search]" stanza in etc/system/default/limits.conf was inadvertently changed at some point to "1" instead of what should be (and was in 4.x) the default value of "4".

To correct this, create a limits.conf under etc/system/local (or edit it if it already exists) and add the following:

[search]
max_searches_per_cpu = 4

Restart Splunk afterwards to pick up the change. Hope this helps others. Thanks.
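
If you want to confirm which value is actually in effect (and which file it comes from), btool will show the merged configuration. A minimal sketch, assuming Splunk is installed under /opt/splunk (adjust the path to your $SPLUNK_HOME):

# Show the effective [search] stanza and the file each setting comes from
/opt/splunk/bin/splunk btool limits list search --debug

After adding the override and restarting, max_searches_per_cpu should be reported from etc/system/local/limits.conf with a value of 4.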

1 Solution

hexx
Splunk Employee

The lowering of the historical search concurrency limit in 5.x is not inadvertent. This was done by design: The idea is that in 5.x, job queueing in combination with a lower search concurrency will provide a better user experience than high search concurrency and no job queueing.

For more details, see this Splunk Answer.

That being said, if you have established that your system can and should allow a higher number of concurrent searches, feel free to increase the value of these parameters in limits.conf. I would typically recommend doing this progressively rather than jumping straight back to max_searches_per_cpu = 4; start with a value of 2 or 3.
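
For example, an intermediate override in etc/system/local/limits.conf would look like this (the value of 2 here is just an illustration of the progressive approach, not a recommendation for any particular system):

[search]
# a modest bump over the 5.x default of 1; observe, then raise further if needed
max_searches_per_cpu = 2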

hexx
Splunk Employee

Thank you. I will make some inquiries.

tmeader
Contributor

The support tech just looked it up, said that he already saw something open for this, and then thanked me for reporting it. Sorry, didn't bother to get an exact number.

hexx
Splunk Employee

@tmeader said:
Splunk support stated that there is a bug open on this setting though, as in "to be changed in the future."

Were you given a reference number for this bug? If so, would you please share it?

tmeader
Contributor

Splunk support stated that there is a bug open on this setting though, as in "to be changed in the future." I can say for sure that this setting was NOT optimal and was NOT having the effect anticipated in the link you cited, at least on our implementation. We were noticing very few search processes being spawned, and the few existing processes were consuming 4-5 GB of RAM each while showing 1200%+ CPU usage in top. We saw an easy 3-4 fold increase in the frequency of the "you have reached the maximum limit of historical searches etc" messages at the top of the screen, and noticeable lag on the initial dashboard loads for some apps. This is on a dual 8-core system. Setting this back to 4 has completely fixed it for us.

dart
Splunk Employee

This is not an error; it is deliberate. See this answer.

yannK
Splunk Employee

see http://splunk-base.splunk.com/answers/70679/why-are-the-default-values-of-max_searches_per_cpu-and-b...

FYI: the multiplier (max_searches_per_cpu) was reduced while the baseline (base_max_searches) was increased; this is part of a tuning process.

In 4.3.*:

# the maximum number of concurrent searches per CPU
max_searches_per_cpu = 4
# the base number of concurrent searches
base_max_searches = 4

=> total = 4 + (nb_cores * 4)

and in 5.0.*:

# the maximum number of concurrent searches per CPU
max_searches_per_cpu = 1
# the base number of concurrent searches
base_max_searches = 6

=> total = 6 + (nb_cores * 1)
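
To make the difference concrete, here is the arithmetic for a 16-core search head under each set of defaults (a quick shell sketch of the formulas above):

# 4.3.* defaults: base_max_searches = 4, max_searches_per_cpu = 4
echo $((4 + 16 * 4))    # 68 concurrent historical searches

# 5.0.* defaults: base_max_searches = 6, max_searches_per_cpu = 1
echo $((6 + 16 * 1))    # 22 concurrent historical searches

Those are exactly the limits of 22 versus 68 reported below for a dual 8-core (16-core) system.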

hexx
Splunk Employee

This is important feedback that we (Splunk Support) would like to (and should!) bring to our developers. If you don't have a case already open against this particular issue, please create a new one.

tmeader
Contributor

The problem with this is that the actual threshold value is evidently MUCH lower with this method. We were being shown a threshold of "22" on a 16-core system, whereas previously the limit was "68". That is a HUGE difference when error messages keep popping up every time the threshold is approached. I would imagine anyone running saved searches with any frequency at all is now seeing warnings constantly.
