You can adjust this, but the calculated maximum is primarily a practical limit on performance. It is controlled in limits.conf.
The maximum number of concurrent searches is calculated as max_searches_per_cpu times the number of CPU cores in the system (as reported by the OS, which means VMs often lie when given multiple vCPUs that are backed by a smaller number of hardware cores, thanks to threading and shady hypervisor oversubscription models), plus base_max_searches.
- max_searches_per_cpu defaults to 1
- base_max_searches defaults to 6

Therefore, on a reference system with 12 cores, you get 1 x 12 + 6 = 18.
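The formula above is simple enough to sketch in a few lines. This is just a toy calculation to make the arithmetic explicit, not actual Splunk code:

```python
def max_concurrent_searches(cores: int, per_cpu: int, base: int) -> int:
    """Concurrency cap: max_searches_per_cpu * cores + base_max_searches."""
    return per_cpu * cores + base

# Reference system from the example: 12 cores, default settings
print(max_concurrent_searches(cores=12, per_cpu=1, base=6))  # -> 18
```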
If you need to crank this up, you can scale the limit linearly with core count by setting max_searches_per_cpu to 2 or more. That changes the math to 2 x 12 + 6 = 30, or 3 x 12 + 6 = 42. If you just want to bump it by a fixed amount instead, raise base_max_searches to a higher value.
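As a sketch of what such a change might look like (assuming the standard [search] stanza in a local limits.conf override; verify stanza and paths against your Splunk version's documentation before deploying):

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
# Scale concurrency with core count: 2 x 12 cores + 6 = 30 on the reference system
max_searches_per_cpu = 2

# Alternatively, leave the per-CPU multiplier alone and add a fixed bump:
# base_max_searches = 10
```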
All of the above comes with a caveat: these changes are generally a bad idea in a production setting, unless you have a highly underutilized system running large numbers of extremely low-memory, low-CPU searches.
The following is the current configuration in our environment. We are hitting the max_concurrent limit reached error on one of our three Search Heads for a specific saved search; it does not happen on the other two Search Heads.
Search Head: 3 x 16 + 10 = 58
Search Peer: 1 x 20 + 6 = 26
Is it necessary to maintain identical parameters on the Search Peers and Search Heads? Should I increase max_searches_per_cpu on the Search Peers from 1 to 2?