I'm getting this message on the Indexer Master for my Cluster when I open the Monitoring Console. On which server should I modify the limits.conf file? The Indexer Master? Each Indexer? Both?
Since this is happening on the stock Monitoring Console dashboards, I wouldn't think badly designed searches (which are often mentioned in other answers to this issue) are the problem.
This message means that you are attempting to run more searches at once than your Splunk server can handle. It isn't an error; eventually all the necessary searches will run - they just can't all run simultaneously.
The maximum number of historical searches that you can run is determined by 2 things: the settings for your role, and the server maximum limit.
You can change the maximum for your role, although since you are running as the admin, your max is probably already unlimited.
It is more likely that you are hitting the maximum for your server. This maximum is set in limits.conf based on the number of cores on your Splunk server; it reflects the fact that each simultaneous search needs a core. While you could raise the limit, that does not change the hardware resources of the server, so it isn't actually going to help.
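For reference, the server-wide ceiling comes from two limits.conf settings and works out roughly like this (if I remember the defaults right):

max_hist_searches = max_searches_per_cpu x number_of_cpus + base_max_searches

With the defaults (max_searches_per_cpu = 1, base_max_searches = 6), a 16-core server tops out at 1 x 16 + 6 = 22 concurrent historical searches.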
I would just accept this for what it is: a warning that you are running a lot of searches at once. If that is truly problematic, then you need to add more hardware to your master node.
This link is not official Splunk documentation, but it was clear enough about causes, solutions, and recommendations.
It fixed the problem the error message was presenting for me:
https://splunkonbigdata.com/2020/07/21/concurrent-historical-searches-in-splunk/
Hi
I must disagree that this is a solution which fixes the root cause. It just gets rid of the message (the symptom), but the root cause (lack of resources) is still there.
Of course you can add search concurrency with this attribute, but it also means the CPU/scheduler has to juggle more tasks (jobs/threads), so each of them will probably take longer than it would with Splunk's recommended values.
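Just for illustration (these numbers are made up, not a recommendation), the kind of change being discussed goes in the [search] stanza of limits.conf on the search head:

[search]
# default is 1 concurrent historical search per CPU core
max_searches_per_cpu = 2
# flat allowance added on top of the per-core figure (default 6)
base_max_searches = 10

Every extra slot you open this way is another search competing for the same cores, which is exactly the trade-off above.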
r. Ismo
I understand your point, and I know this is not a definitive solution, but the queuing was causing Splunk ES to show no information at all, so it is at least a workaround while we review the scheduled searches in detail.
The link is clear, concise, visual, and well documented, which is exactly what users of the free or paid versions look for when trying to understand the errors or alerts the tool presents.
I am only sharing my point of view; it cannot be assumed that what is obvious to you, given your knowledge, is also obvious to every user who visits the forum.
And I really want to thank the few people like you who do answer questions.
Have you tried stopping/deleting jobs that are "paused" from the Job Manager? I had a similar issue, and this resolved it rather than changing the .conf.
Go to your DMC and click Search > Activity > Search Activity: Instance.
There you'll see your search concurrency (Running/Limit), and below that you can filter the activity by user to find out who the culprit is. Then go to the Job Manager and stop/delete their searches.
Just a suggestion.
http://docs.splunk.com/Documentation/Splunk/7.1.2/Search/SupervisejobswiththeJobspage
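If you prefer a search over clicking around, something along these lines against the audit index should show who is launching the most searches (this assumes the standard audittrail events in _audit):

index=_audit action=search info=granted earliest=-1h
| stats count AS searches_started BY user
| sort - searches_started

Then stop the worst offenders' jobs from the Job Manager as above.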
Nice explanation.
Thank you @lgunin
I see the same error in the Splunk Monitoring Console. We have separate indexers for prod and non-prod, and a distributed search head on a different VM talking to both of these indexers.
Do I have to check the concurrent-search values defined in limits.conf on both indexers?
Are these the values you are talking about?
max_searches_per_cpu
base_max_searches
max_rt_search_multiplier
Our Prod indexer has 18 CPU cores
and our Non-Prod indexer has 8 CPU cores.
So, depending on resources, I can update the above values as follows:
In Prod:
max_searches_per_cpu = 18
base_max_searches = 8
max_rt_search_multiplier = 4
In Non-Prod:
max_searches_per_cpu = 8
base_max_searches = 5
max_rt_search_multiplier = 4
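If I am reading the formula above correctly, these values would work out to:

Prod:     18 x 18 + 8 = 332 concurrent historical searches
Non-Prod:  8 x 8 + 5  = 69 concurrent historical searches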
Can you please let me know if these thresholds make sense?
Thanks
Divya