I have a search that a user recently moved from running every hour to every 10 minutes. Cron:
3-59/10 * * * *
(i.e., it fires at minutes :03, :13, :23, :33, :43, and :53 of every hour)
The search takes ~2 minutes to run.
The window is set to auto.
But now I see this in the scheduler log:
10-25-2022 06:13:00.633 +0000 INFO SavedSplunker - savedsearch_id="nobody;rcc; Pull: Pull Domain IOCs from MISP", search_type="scheduled", user="thatOneGuyCausingProblemsForMe", app="myApp", savedsearch_name="Pull: Pull Domain IOCs from MISP", priority=default, status=skipped, reason="The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached", concurrency_category="historical_scheduled", concurrency_context="saved-search_cluster-wide", concurrency_limit=1, scheduled_time=1666678380, window_time=-1, skipped_count=1, filtered_count=0
We have 4 very similar searches (similar schedule, duration, window, etc.), all hitting the same error, and the skips fire off very consistently.
Splunk's complaint is that the search is trying to start while another instance of the same search is still running. But the searches only take ~2 minutes, and there are 10 minutes between scheduled runs.
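To confirm whether runs really do overlap, something like this against the internal scheduler log should show actual dispatch times and durations for each run (field names here are what I believe scheduler.log emits on completion events; adjust the savedsearch_name as needed):

```
index=_internal sourcetype=scheduler savedsearch_name="Pull: Pull Domain IOCs from MISP"
| eval end_time = dispatch_time + run_time
| table scheduled_time dispatch_time run_time end_time status reason
| sort - scheduled_time
```

If end_time of one run ever lands past the dispatch_time of the next, the runs genuinely overlap; if not, something else (e.g. a stuck or orphaned instance) is holding the concurrency slot.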
I understand I can go into limits.conf and raise the concurrency limits, but I do not see how these searches could be overlapping themselves in the first place. I don't want to just hide the problem behind more CPUs.
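For reference, the limit the log line mentions (concurrency_limit=1) is, as far as I know, the per-saved-search cap, which is controlled by max_concurrent in savedsearches.conf rather than the global caps in limits.conf. Raising it would mask the overlap rather than explain it, but for completeness:

```
# savedsearches.conf on the search head
[Pull: Pull Domain IOCs from MISP]
# allow two concurrent instances of this one search (default is 1)
max_concurrent = 2
```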