We are currently running Splunk 6.2.3. One user has created an alert that is being skipped with the reason "Out of search disk space".
04-07-2016 23:55:01.126 -0400 INFO SavedSplunker - savedsearch_id="nobody;custom_app;Watch Team - After Hours", user="user1", app="custom_app", savedsearch_name="Watch Team - After Hours", status=skipped, reason="Out of search disk space.", scheduled_time=1460087700
04-07-2016 23:55:01.126 -0400 WARN SavedSplunker - Max alive instance_count=1 reached for savedsearch_id="nobody;custom_app;Watch Team - After Hours"
A user who is a member of the admin role cloned and scheduled the search, and it ran without issue.
After investigating the issue, I found "rtSrchJobsQuota = 6" in the /etc/system/default/authorize.conf file. The user in question had previously configured/scheduled six searches/alerts, and this would be the seventh. Am I correct in deducing that the rtSrchJobsQuota value is what is preventing this alert from running? If so, would scheduled searches/alerts be considered "real time"?
Thank you.
Hi adamblock2, based on the error message it seems likely that the user is running into a srchDiskQuota limitation rather than rtSrchJobsQuota (which applies only to real-time searches, not scheduled ones). Check the spec for authorize.conf for more info: http://docs.splunk.com/Documentation/Splunk/latest/Admin/authorizeconf
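For reference, here is a minimal sketch of how that quota could be raised for the affected role. The role name below is hypothetical; adjust it to whatever role the user actually holds, and note that srchDiskQuota is expressed in MB. Make the change in a local file, not in /etc/system/default, which is overwritten on upgrade:

```
# $SPLUNK_HOME/etc/system/local/authorize.conf
# Hypothetical role stanza -- substitute the user's real role name.
[role_custom_user]
# Maximum disk space (MB) the role's search jobs may use before
# further searches are skipped with "Out of search disk space".
srchDiskQuota = 500
```

After editing, a restart (or a debug/refresh of the authorization settings) is needed for the change to take effect.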
Please let me know if this answers your question! 😄
What query could I use to ascertain how close a user is to the srchDiskQuota limit?
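One approach (a sketch, not something I have verified on 6.2.x specifically) is to query the search-jobs REST endpoint, which exposes a diskUsage value in bytes per job, and sum it per owner. You can then compare the totals against each role's srchDiskQuota manually:

```
| rest /services/search/jobs
| stats sum(diskUsage) as disk_usage_bytes by author
| eval disk_usage_mb = round(disk_usage_bytes / 1024 / 1024, 2)
| sort - disk_usage_mb
```

Note this only counts jobs whose artifacts are still on disk in the dispatch directory; expired jobs no longer count against the quota.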