We are currently not getting any alerts triggered for our data in the environment.
I checked /opt/splunk/var/log/splunk/scheduler.log, going back a few days.
Going through the alerts, I found a common reason which reads:
savedsearch_name="***************", status=skipped, reason="maxsearches limit reached"
I feel some optimization needs to be done on the searches and saved searches running in the environment.
Is there any command or approach to see which searches are running at that time and causing our alerts to be skipped daily?
This shows you skipped searches:
index=_internal sourcetype=scheduler status=skipped | timechart span="5m" count by savedsearch_name
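To see what else was running when the skips occurred, you can snapshot the current search jobs via the REST endpoint, or correlate against scheduler activity over the same window. A sketch (note: the exact field names returned by the jobs endpoint, such as title, dispatchState, and runDuration, can vary by Splunk version, so verify them against your instance):

| rest /services/search/jobs splunk_server=local | search dispatchState=RUNNING | table title, dispatchState, runDuration

index=_internal sourcetype=scheduler | timechart span="5m" count by status

The second search breaks scheduler activity down by status (success, skipped, etc.) in the same 5-minute buckets, so you can line up spikes in successful runs against the windows where your alerts were skipped.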
Skipped typically means a scheduled search did not finish before its next scheduled run was due to start, so that next run is skipped to avoid backing up the queue indefinitely. So a good starting place is the searches that are being skipped. Do everything you can to optimize THOSE searches. Consider running them less frequently or over a smaller time range. Consider adding more indexers. Run the Health Checks on the Monitoring Console and fix KNOWN system problems, etc. Above all else, DO NOT run scheduled real-time searches.
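The "maxsearches limit reached" message means the scheduler hit its concurrent-search quota, which is governed by limits.conf. As a last resort (after optimizing the searches themselves), you can raise the scheduler's share of concurrent searches. A sketch of the relevant stanza, assuming defaults; verify the attribute names and current values against the limits.conf spec for your Splunk version before changing anything:

# $SPLUNK_HOME/etc/system/local/limits.conf
[scheduler]
# Percentage of the total concurrent-search limit
# that scheduled searches are allowed to use.
max_searches_perc = 50

Raising this only shifts capacity from ad-hoc searches to scheduled ones; it does not add capacity, so optimizing or rescheduling the skipped searches is still the real fix.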
Thanks Woodcock... but how can we "Run the Health Checks on the Monitoring Console"? Is there any command or specific process to follow to find the system faults?
Thanks for your reply; this was helpful.