Running Splunk 6.5.2 build 67571ef4b87d.
I have 4 searches saved as alerts that send emails when triggered by certain content in an event.
I keep getting errors in the scheduler log, and the emails are suppressed:
07-12-2017 19:26:11.265 +0200 INFO SavedSplunker - savedsearch_id="nobody;search;Pre-Trade Multi-Account-Alloc", search_type="scheduled", user="paschilke", app="search", savedsearch_name="Pre-Trade Multi-Account-Alloc", priority=default, status=skipped, reason="The maximum number of concurrent running jobs for this real-time scheduled search on this instance has been reached", concurrency_category="real-time_scheduled", concurrency_context="saved-search_instance-wide", concurrency_limit=1, scheduled_time=1499880360, window_time=0
I scanned through the forums, and it looks like this is due to a limit on how many real-time searches can run on the available CPUs.
I have adjusted max_searches_per_cpu and max_rt_search_multiplier in limits.conf according to the documentation, but this did not help.
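For reference, this is roughly what I changed. Both settings live in the [search] stanza of limits.conf (I edited the copy in $SPLUNK_HOME/etc/system/local/). The values below are just my examples, not recommendations; as I understand the docs, the historical search limit is max_searches_per_cpu x number_of_cpus + base_max_searches, and the real-time limit is that figure times max_rt_search_multiplier:

```ini
[search]
# Historical concurrency = max_searches_per_cpu x #CPUs + base_max_searches
max_searches_per_cpu = 2
base_max_searches = 6
# Real-time concurrency = max_rt_search_multiplier x historical limit
max_rt_search_multiplier = 2
```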
I also could not find any information on which role wins for the "User-level concurrent search jobs limit" when a user has multiple roles: the more restrictive one or the less restrictive one?
Where else can I look?
Any help really appreciated.
I think you are hitting a known issue:
SPL-133405, SPL-140769, SPL-140814, SPL-140820, SPL-142014
It is fixed in 6.5.4:
https://docs.splunk.com/Documentation/Splunk/6.5.4/ReleaseNotes/6.5.4
You can adjust the real-time search job quotas for a role in authorize.conf:
[role_your_role]
cumulativeRTSrchJobsQuota = 20
rtSrchJobsQuota = 20
The least-restrictive setting will take precedence.
Did you restart the search head after changing the settings in limits.conf? They do not take effect until after a restart.
Also, look at the RT search's stanza in savedsearches.conf and see if you can raise its priority:
schedule_priority = default | higher | highest
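For example, in savedsearches.conf (the stanza name matches the saved search name; I am using "Pre-Trade Multi-Account-Alloc" from your log above):

```ini
[Pre-Trade Multi-Account-Alloc]
# Tell the scheduler to favor this search when concurrency is tight
schedule_priority = higher
```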