Splunk Search

The maximum number of concurrent historical scheduled searches on this instance has been reached

pdantuuri0411
Explorer

I often see entries like the ones below in scheduler.log[1] for searches that are getting skipped. We have 15 alerts set up: 2 run every minute, 8 run every 5 minutes, and the rest run every hour. Because of the concurrency limit, their status is "skipped" and the alerts are not getting triggered.

The concurrency limit is set to 5 based on the log below.
We are using a 4-core CPU, so according to limits.conf, shouldn't the limit be 10 concurrent searches (6 + 4*1)?

How can I keep the alerts from getting skipped?

[1]

02-01-2019 13:35:00.283 -0600 INFO SavedSplunker - savedsearch_id="nobody;search;JBoss7 STUCK Thread Alert", search_type="scheduled", user="350961", app="search", savedsearch_name="JBoss7 STUCK Thread Alert", priority=default, status=skipped, reason="The maximum number of concurrent historical scheduled searches on this instance has been reached", concurrency_category="historical_scheduled", concurrency_context="saved-search_instance-wide", concurrency_limit=5, scheduled_time=1549049640, window_time=0
02-01-2019 13:35:00.284 -0600 INFO SavedSplunker - savedsearch_id="nobody;search;JBoss7 OutOfMemory Alert", search_type="scheduled", user="329421", app="search", savedsearch_name="JBoss7 OutOfMemory Alert", priority=default, status=skipped, reason="The maximum number of concurrent historical scheduled searches on this instance has been reached", concurrency_category="historical_scheduled", concurrency_context="saved-search_instance-wide", concurrency_limit=5, scheduled_time=1549049640, window_time=0

lakshman239
SplunkTrust

Within the current hardware constraints, you could try the following:

  1. Look at how much time each alert/search takes [using the Job Inspector] and fine-tune them to reduce the search time.
  2. Look at spreading the searches out, e.g. instead of running everything on the hour, run some searches 10 mins past the hour, some 20 mins past the hour, etc. (see the sketch after this list).
  3. Re-evaluate whether you really need to run searches every minute or two, check the earliest and latest times, and see if you can reduce the time period.
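For point 2, the stagger is just a change to each saved search's cron schedule. A minimal sketch in savedsearches.conf, assuming two of the hourly alerts (the stanza names below are placeholders, not your actual saved searches):

# savedsearches.conf -- spread the hourly alerts across the hour
# (stanza names are illustrative only)
[Hourly Alert A]
# run at 10 minutes past the hour
cron_schedule = 10 * * * *

[Hourly Alert B]
# run at 20 minutes past the hour
cron_schedule = 20 * * * *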

pdantuuri0411
Explorer

Thank you for your reply.

Do you know why the concurrency limit is 5?

From the documentation, isn't it supposed to be 10? 6 + (4*1)

maximum concurrent historical searches = base_max_searches + (#cpus * max_searches_per_cpu)

base_max_searches = 6 (the base number of concurrent searches)

max_searches_per_cpu = 1 (the maximum number of concurrent searches per CPU)

max_rt_search_multiplier = 1 (max real-time searches = max_rt_search_multiplier x max historical searches)
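To confirm which values are actually in effect on your search head, btool shows the merged limits.conf. The grep filter is just for convenience, and the path assumes a default $SPLUNK_HOME:

$SPLUNK_HOME/bin/splunk btool limits list search --debug | grep -Ei 'base_max_searches|max_searches_per_cpu|max_rt_search_multiplier'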


psharkey
Explorer

The default value of max_searches_perc in the scheduler stanza of the limits.conf file is 50 (as in 50%).

This is the maximum number of searches the scheduler can run, as a percentage of the maximum number of concurrent searches.

50% of 10 is 5.
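If, after tuning and staggering the searches, you still need more scheduler headroom, that percentage can be raised in limits.conf on the search head. A minimal sketch, with 75 as an illustrative value only (raising it lets scheduled searches crowd out ad-hoc searches):

# $SPLUNK_HOME/etc/system/local/limits.conf (or an app's local directory)
[scheduler]
# let scheduled searches use up to 75% of the total concurrency limit
# (default is 50, which is why 50% of 10 = 5 on this 4-core search head)
max_searches_perc = 75

A restart of splunkd is typically needed for limits.conf changes to take effect.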
