Monitoring Splunk

Is the max_searches_per_cpu setting based on CPUs of the Index Cluster or just the search head?

Path Finder

We are receiving the message "This instance is approaching the max concurrency searches" on our search head. Usually, if we are hitting concurrency for a role, the message says "...max concurrency for this role." From what I've read, when the instance itself hits max concurrency, it relates to the system-wide concurrency limit that Splunk calculates based on the number of CPUs.

Does the max_searches_per_cpu setting need to be applied in the limits.conf of the indexers or the search heads?

Thanks,


Re: Is the max_searches_per_cpu setting based on CPUs of the Index Cluster or just the search head?

Legend

Hi @jordanking1992,
yes, you could modify the limits.conf of your Search Heads to avoid this message, but that doesn't solve the problem, it only delays it.
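For reference, the system-wide historical search ceiling is derived from settings in the [search] stanza of limits.conf on the Search Head, roughly max_searches_per_cpu × number of CPUs + base_max_searches. A minimal sketch, with illustrative values (not a recommendation):

```
# $SPLUNK_HOME/etc/system/local/limits.conf on the Search Head
# Historical search ceiling ~= max_searches_per_cpu * number_of_cpus + base_max_searches
[search]
max_searches_per_cpu = 2
base_max_searches = 6
```

Raising these values only lifts the ceiling; it does not add the CPU capacity needed to actually run the extra concurrent searches.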

This message gives you a more important piece of information: you need to analyze the load on your Search Heads and check whether the number of CPUs is adequate for the load they have to manage!
Remember that every search takes a CPU core, so I suggest using the Splunk Monitoring Console to analyze the searches running during the overload period; you may need to add more resources (CPUs and RAM), add a new Search Head or a new Indexer, or optimize some of your scheduled searches.

e.g. one of my customers had, across three Search Heads (each with 16 CPUs), a dashboard with 12 panels, each containing a complex real-time search with two or three subsearches, and this dashboard was regularly used by around ten users; so you can see that the problem wasn't resources, but the searches themselves!

Ciao.
Giuseppe


Re: Is the max_searches_per_cpu setting based on CPUs of the Index Cluster or just the search head?

Path Finder

Thanks for the response. This is what I was suspecting. The reason for posting is that at this time of year we have excessive use of Splunk: teams put up all their dashboards for holiday monitoring and we start getting this message. What you described with dashboards and panels is exactly what's happening here.

As an admin, how do you manage multiple users loading the same dashboard with 12 searches (thus triggering many concurrent searches)? I explain to the users that this is why searches are queued but they do not seem to understand it haha.


Re: Is the max_searches_per_cpu setting based on CPUs of the Index Cluster or just the search head?

Legend

Hi @jordanking1992,
you're welcome!

at first, see if you can optimize your dashboards using Post-Process Searches: you can do this when a dashboard has a single base search and many panels that display different views of the same data ( https://docs.splunk.com/Documentation/Splunk/8.0.0/Viz/Savedsearches#Post-process_searches_2 ).
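The pattern looks like this in Simple XML (the index, sourcetype, and panel contents below are invented for illustration): the base search runs once and ends with a transforming command, and each panel post-processes its results instead of dispatching its own search.

```xml
<dashboard>
  <label>Post-process sketch</label>
  <!-- Base search: runs once and ends with a transforming command (stats) -->
  <search id="base">
    <query>index=web sourcetype=access_combined | stats count BY status, host</query>
    <earliest>-4h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <!-- Post-process: refines the base results, no new search dispatched -->
        <search base="base">
          <query>stats sum(count) BY status</query>
        </search>
      </chart>
    </panel>
    <panel>
      <table>
        <search base="base">
          <query>stats sum(count) BY host</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```

With this layout the two panels cost one search slot instead of two.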

Then see if you can replace the real-time searches with scheduled reports called from the dashboard ( https://docs.splunk.com/Documentation/Splunk/8.0.0/Report/Embedscheduledreports ).
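In Simple XML a panel can reference a scheduled report by name, so the panel reuses the report's last scheduled run instead of dispatching a new search ("Daily Error Summary" is a hypothetical report name):

```xml
<panel>
  <table>
    <!-- Reuses the most recent scheduled run of this (hypothetical) report -->
    <search ref="Daily Error Summary"></search>
  </table>
</panel>
```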

Then check whether the searches in your panels are optimized: e.g. avoid the transaction and join commands, replacing them with the stats command where possible.
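For example, a join of two sourcetypes on a shared field can often be rewritten as a single stats pass (the index, sourcetypes, and field names here are invented for illustration):

```
index=app (sourcetype=requests OR sourcetype=responses)
| stats values(status) AS status values(uri) AS uri BY request_id
```

Compared to `index=app sourcetype=requests | join request_id [ search index=app sourcetype=responses ]`, this scans the data once and avoids the subsearch's result and runtime limits.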

Ciao and see you next time!
Giuseppe
