Splunk Search

The maximum number of concurrent historical searches on this instance has been reached.

gregbo
Communicator

I'm getting this message on the Indexer Master for my Cluster when I open the Monitoring Console. On which server should I modify the limits.conf file? The Indexer Master? Each Indexer? Both?

Since this is happening on the stock Monitoring Console Dashboard, I wouldn't think bad design (which is often mentioned in other answers to this issue) is the problem.


splunkcol
Builder

 

This link is not officially supported by Splunk, but it was clear enough about the causes, solutions, and recommendations.

It fixed the problem I was having with this error message:

https://splunkonbigdata.com/2020/07/21/concurrent-historical-searches-in-splunk/
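
For anyone who doesn't want to click through: the kind of change the article describes presumably ends up in limits.conf on the instance running the searches, under the [search] stanza. A minimal sketch, with purely illustrative values rather than recommendations:

# limits.conf - raising these trades per-search speed for more concurrency
[search]
# concurrent historical searches allowed per CPU core (default is typically 1)
max_searches_per_cpu = 2
# fixed number of searches added on top of the per-core allowance (default is typically 6)
base_max_searches = 10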


isoutamo
SplunkTrust

Hi

I must disagree that this is a solution which fixes the root cause. It just gets rid of that message (the symptom), but the root cause (lack of resources) is still there.

Of course you can increase search concurrency with this attribute, but it also means the CPU scheduler has to juggle more tasks (jobs/threads), so those tasks will probably take longer than they would with Splunk's recommended values.

r. Ismo


splunkcol
Builder

I understand your point, and I know it is not a definitive solution. But the queuing was causing Splunk ES to show no information at all, so this at least provides a workaround while we review the scheduled searches in detail.

The link is clear, concise, visual, and well-documented information, which is just what users on the free or paid versions look for to understand the errors or alerts the tool presents.

isoutamo
SplunkTrust
That's true. Just like 1+1=1 (if you put a chicken and a fox in the same room and count tomorrow how many animals there are 😉).

splunkcol
Builder

I am only sharing my point of view. It cannot be assumed that what is obvious to you, based on your knowledge, is also obvious to every user who visits the forum.

And I really want to thank the few people like you who take the time to answer questions.

CodyQ
Explorer

Have you tried stopping/deleting jobs that are "paused" from the Job Manager? I had a similar issue, and this resolved it rather than changing the .conf.

Go to your DMC (Monitoring Console) and click through to Search > Activity > Search Activity: Instance.

There you'll see your search concurrency (Running/Limit), and below that you can filter the activity by user to find out who the culprit is. Then go to the Job Manager and stop/delete their searches.

Just a suggestion.

http://docs.splunk.com/Documentation/Splunk/7.1.2/Search/SupervisejobswiththeJobspage
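
If it helps, you can also see who is holding the concurrency slots with a REST search from the search bar. Something along these lines, assuming the standard /services/search/jobs endpoint and its author, title, dispatchState, and runDuration fields:

| rest /services/search/jobs count=0
| search dispatchState=RUNNING OR dispatchState=QUEUED
| table author title dispatchState runDuration
| sort - runDuration

That lets you spot long-running or piled-up jobs per user before you go to the Job Manager to stop or delete them.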


lguinn2
Legend

This message means that you are attempting to run more searches at once than your Splunk server can handle. It isn't an error; eventually all the necessary searches will run - they just can't all run simultaneously.

The maximum number of historical searches that you can run is determined by 2 things: the settings for your role, and the server maximum limit.

You can change the maximum for your role, although since you are running as the admin, your max is probably already unlimited.

It is more likely that you are hitting the maximum for your server. This maximum is set in limits.conf based on the number of cores on your Splunk server. It reflects the fact that each simultaneous search requires a core. While you could change the limit, that ultimately does not change the hardware resources of the server. In other words, raising it isn't actually going to help.

I would just accept this for what it is: a warning that you are running a lot of searches at once. If that is truly problematic, then you need to add more hardware to your master node.
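
To make the arithmetic concrete, here is a minimal sketch of the relevant limits.conf settings and how they combine (the defaults below are typical, but check the limits.conf spec for your version):

# limits.conf on the instance that runs the searches
[search]
# max concurrent historical searches =
#   max_searches_per_cpu x number_of_CPU_cores + base_max_searches
max_searches_per_cpu = 1
base_max_searches = 6
# concurrent real-time searches are capped at
#   max_rt_search_multiplier x the historical limit above
max_rt_search_multiplier = 1

So a 16-core search head with these defaults would allow 1 x 16 + 6 = 22 concurrent historical searches.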

w199284
Explorer

Nice explanation.


divyamudundi
Path Finder

Thank you @lguinn2

I see the same error in the Splunk Monitoring Console. We have separate indexers for prod and non-prod, and a distributed search head on a different VM talking to both of these indexers.

Do I have to go and check the concurrent search values defined in limits.conf on both of the indexers?
Are these the values you are talking about?
max_searches_per_cpu
base_max_searches
max_rt_search_multiplier

Our prod indexer has 18 CPU cores and our non-prod indexer has 8 CPU cores.

So, depending on resources, I can update the above values:
In Prod:
max_searches_per_cpu = 18
base_max_searches= 8
max_rt_search_multiplier = 4

In Non-Prod:
max_searches_per_cpu = 8
base_max_searches = 5
max_rt_search_multiplier = 4

Can you please let me know if these thresholds make sense?
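
For reference, if the defaults really are max_searches_per_cpu = 1 and base_max_searches = 6, the out-of-the-box limits would already be:

Prod (18 cores):     1 x 18 + 6 = 24 concurrent historical searches
Non-Prod (8 cores):  1 x 8 + 6  = 14 concurrent historical searches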

Thanks
Divya
