The Monitoring Console works well, as per the above posts. Alternatively, from Alerts for Splunk Admins (SplunkBase), the simplified version of:
AllSplunkEnterpriseLevel - Splunk Scheduler skipped searches and the reason (github)
is:
index=_internal sourcetype=scheduler status=skipped source=*scheduler.log
| fillnull concurrency_category concurrency_context concurrency_limit
| stats count, earliest(_time) AS firstSeen, latest(_time) AS lastSeen by savedsearch_id, reason, app, concurrency_category, concurrency_context, concurrency_limit, search_type, user, host
| eval firstSeen = strftime(firstSeen, "%+"), lastSeen=strftime(lastSeen, "%+")
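If you also want a sense of how often each search is skipped relative to how often it runs, a rough follow-on sketch (assuming the standard scheduler.log status values of skipped and success) would be:
index=_internal sourcetype=scheduler source=*scheduler.log (status=skipped OR status=success)
| stats count(eval(status="skipped")) AS skipped, count AS total by savedsearch_name, app, user
| eval skip_percent=round(skipped/total*100, 1)
| sort - skip_percent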
Please hit accept on the most appropriate answer, although upvoting is also appreciated 🙂
The Monitoring Console (Settings->Monitoring Console->Search->Scheduler Activity) offers several breakdowns of skipped searches over time. You can click the magnifying glass icon in any of them to open the panel in Search so you can customize it as desired.
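If you would rather run something similar ad hoc instead of going through the MC, a rough approximation of a skipped-searches-over-time panel (not the exact search the MC dashboards use) is:
index=_internal sourcetype=scheduler status=skipped source=*scheduler.log
| timechart span=1h count by reason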
Mostly the reason is: The maximum number of concurrent running jobs for this historical scheduled search on this instance has been reached (2).
If you don't find that in the MC, try this query.
index=_internal source=*scheduler.log "The maximum number of concurrent running jobs for this historical scheduled search"
| timechart count by savedsearch_name
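If those skips cluster around particular times, staggering the cron schedules of the affected searches can help; to see the hour-of-day distribution of concurrency-related skips per search (an illustrative variant, not from the original answer), you could run:
index=_internal source=*scheduler.log status=skipped "The maximum number of concurrent running jobs for this historical scheduled search"
| eval hour=strftime(_time, "%H")
| stats count by hour, savedsearch_name
| sort - count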
Hey - check out the answer I posted here: https://answers.splunk.com/answers/790088/splunk-searches-delayed.html#answer-790351. I lean heavily on the Monitoring Console for its built-in searches!