Splunk Search

The maximum number of concurrent historical scheduled searches on this instance has been reached

pdantuuri0411
Explorer

I often see entries like the ones below in scheduler.log[1] showing searches getting skipped. We have 15 alerts configured: 2 run every minute, 8 run every 5 minutes, and the rest run every hour. Because of the concurrency limit, the status comes back as skipped and the alerts are not triggered.

The concurrency limit is set to five, based on the log below.
We are using a 4-core CPU, and according to limits.conf, shouldn't the limit be 10 concurrent searches (6 + 4*1)?

How can I prevent the alerts from being skipped?

[1]

02-01-2019 13:35:00.283 -0600 INFO SavedSplunker - savedsearch_id="nobody;search;JBoss7 STUCK Thread Alert", search_type="scheduled", user="350961", app="search", savedsearch_name="JBoss7 STUCK Thread Alert", priority=default, status=skipped, reason="The maximum number of concurrent historical scheduled searches on this instance has been reached", concurrency_category="historical_scheduled", concurrency_context="saved-search_instance-wide", concurrency_limit=5, scheduled_time=1549049640, window_time=0
02-01-2019 13:35:00.284 -0600 INFO SavedSplunker - savedsearch_id="nobody;search;JBoss7 OutOfMemory Alert", search_type="scheduled", user="329421", app="search", savedsearch_name="JBoss7 OutOfMemory Alert", priority=default, status=skipped, reason="The maximum number of concurrent historical scheduled searches on this instance has been reached", concurrency_category="historical_scheduled", concurrency_context="saved-search_instance-wide", concurrency_limit=5, scheduled_time=1549049640, window_time=0
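
For reference, here is a quick SPL sketch (assuming the default _internal index is searchable from the search head) to quantify which searches are being skipped and why:

index=_internal source=*scheduler.log* status=skipped
| stats count by savedsearch_name, reason
| sort - count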

lakshman239
SplunkTrust

Within the current hardware constraints, you could try the following:

  1. Look at how much time each alert/search takes (using the Job Inspector) and fine-tune them to reduce search time.
  2. Spread the searches out: instead of running everything on the hour, run some searches 10 minutes past the hour, some 20 minutes past, and so on (see the cron sketch after this list).
  3. Re-evaluate whether you really need searches running every minute, and check the earliest/latest time range to see whether you can shorten the search period.
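
For point 2, a minimal savedsearches.conf sketch of staggered schedules (the stanza names here are hypothetical; cron_schedule is the standard setting for a saved search's schedule):

[Hourly Alert A]
# run at 10 minutes past each hour instead of on the hour
cron_schedule = 10 * * * *

[Hourly Alert B]
# run at 20 minutes past each hour
cron_schedule = 20 * * * *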

pdantuuri0411
Explorer

Thank you for your reply.

Do you know why the concurrency limit is 5?

From the documentation, isn't it supposed to be 10, i.e. 6 + (4*1)?

The formula from limits.conf:

base_max_searches + (#CPUs * max_searches_per_cpu)

with these values:

base_max_searches = 6        (the base number of concurrent searches)
max_searches_per_cpu = 1     (the maximum number of concurrent searches per CPU)
max_rt_search_multiplier = 1 (max real-time searches = max_rt_search_multiplier x max historical searches)


psharkey
Explorer

The default value of max_searches_perc in the scheduler stanza of the limits.conf file is 50 (as in 50%).

This is the maximum number of searches the scheduler can run, as a percentage of the maximum number of concurrent searches.

50% of 10 is 5, which matches the concurrency_limit=5 in your scheduler.log.
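
If you want the scheduler to have a bigger share of that total, a minimal limits.conf sketch for the search head (the value 75 is only an example, not a recommendation; raising it leaves less headroom for ad-hoc searches, so test before rolling out):

[search]
# defaults shown for reference
base_max_searches = 6
max_searches_per_cpu = 1

[scheduler]
# let the scheduler use up to 75% of total search concurrency (default: 50)
max_searches_perc = 75

On a 4-core box that would allow roughly 0.75 * (6 + 4*1) = 7 concurrent scheduled searches instead of 5.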
