How to adjust the maximum number of concurrent running jobs for this real-time scheduled search?


Running Splunk 6.5.2 build 67571ef4b87d.

4 searches saved as alerts to send emails when triggered by certain content in an event.

Keep getting errors in the scheduler log and emails are suppressed:

07-12-2017 19:26:11.265 +0200 INFO SavedSplunker - savedsearchid="nobody;search;Pre-Trade Multi-Account-Alloc", searchtype="scheduled", user="paschilke", app="search", savedsearchname="Pre-Trade Multi-Account-Alloc", priority=default, status=skipped, reason="The maximum number of concurrent running jobs for this real-time scheduled search on this instance has been reached", concurrencycategory="real-timescheduled", concurrencycontext="saved-searchinstance-wide", concurrencylimit=1, scheduledtime=1499880360, windowtime=0

Scanned through the forums, and it looks like this is due to a limit on how many real-time searches can run on the available CPUs.
I have adjusted max_searches_per_cpu and max_rt_search_multiplier in limits.conf according to the documentation, but this did not help.
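For reference, these are the limits.conf settings involved; the values below are only illustrative, not recommendations — tune them to your hardware:

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
# Concurrent historical searches allowed per CPU core
max_searches_per_cpu = 1
# Baseline number of concurrent searches regardless of CPU count
base_max_searches = 6
# Real-time search limit = max_rt_search_multiplier x the concurrent-search limit
max_rt_search_multiplier = 1
```

Changes to limits.conf require a restart of the search head to take effect.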

Could not find any information on which of a user's roles wins with regard to the "User-level concurrent search jobs limit" — the more restrictive one or the less restrictive one?

Where else can I look?

Any help really appreciated.

I think it is a known issue, tracked under these bug IDs:

SPL-133405, SPL-140769, SPL-140814, SPL-140820, SPL-142014

It is fixed in 6.5.4.


You can adjust the real-time search job quotas in authorize.conf:

cumulativeRTSrchJobsQuota = 20
rtSrchJobsQuota = 20
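These settings go under the stanza of the role the alert owner holds. A minimal sketch, assuming the user has the admin role — adjust the stanza name to the actual role:

```ini
# $SPLUNK_HOME/etc/system/local/authorize.conf
[role_admin]
# Max concurrent real-time search jobs for a single user with this role
rtSrchJobsQuota = 20
# Max concurrent real-time search jobs across all users holding this role
cumulativeRTSrchJobsQuota = 20
```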



The least-restrictive setting will take precedence.

Did you restart the search head after changing the settings in limits.conf?

Also, look in savedsearches.conf for the RT search and see if you can raise its priority.

schedule_priority = default | higher | highest
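As a sketch, in savedsearches.conf the setting sits in the saved search's stanza; the stanza name below is taken from the scheduler log above, so verify it matches your configuration:

```ini
# $SPLUNK_HOME/etc/apps/search/local/savedsearches.conf
[Pre-Trade Multi-Account-Alloc]
# Raise this search's scheduling priority above default-priority searches
schedule_priority = higher
```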