
Why were the default values of max_searches_per_cpu and base_max_searches in limits.conf lowered in 5.x?

mchang_splunk
Splunk Employee

After upgrading to 5.0, I see that the default values of max_searches_per_cpu and base_max_searches in $SPLUNK_HOME/etc/system/default/limits.conf have changed.

In $SPLUNK_HOME/etc/system/default/limits.conf:

4.x:

# the maximum number of concurrent searches per CPU
max_searches_per_cpu = 4

# the base number of concurrent searches
base_max_searches = 4

5.x:

# the maximum number of concurrent searches per CPU
max_searches_per_cpu = 1

# the base number of concurrent searches
base_max_searches = 6

This means that on a server with 4 CPU cores, Splunk 5.x will limit the number of concurrent searches to 10 (6 + 4 * 1) where the limit would have been 20 (4 + 4 * 4) with Splunk 4.x.
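In other words, the effective cap on concurrent searches works out to (using the two settings shown above):

maximum concurrent searches = base_max_searches + (max_searches_per_cpu * number of CPU cores)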

Why was this change made?

1 Solution

mchang_splunk
Splunk Employee

These values were changed because in 5.x and later, search jobs started from the UI can be queued when the search concurrency limit is reached instead of being refused. The back end (splunkd) has had this capability since 4.2, but the UI did not handle queued jobs until 5.0.

The bottom line is that in 5.x the maximum number of concurrent searches has been lowered, but this should be offset by full support for search job queueing. Overall, the goal is to improve the search experience on systems with high search concurrency: your search might be queued for a bit (ideally no more than a few seconds), but it should run faster once launched because fewer searches will be contending for the same resources (most notably disk I/O).
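If the lower defaults are too restrictive for your hardware, one option (a minimal sketch, not an official recommendation) is to raise the limits back up in a local configuration file rather than editing the shipped defaults. The values below simply restore the 4.x settings; tune them to what your disks and CPUs can actually sustain.

In $SPLUNK_HOME/etc/system/local/limits.conf:

[search]
# restore the 4.x defaults; adjust to your hardware
max_searches_per_cpu = 4
base_max_searches = 4

Changes to limits.conf generally require a splunkd restart to take effect.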



