We've all seen this message: "Disk usage quota (user-level) has been reached. usage=540MB quota=500MB." Then, after a while, the user's concurrent search quota error hits due to queuing, and their other scheduled searches get skipped. All the while the user has no idea unless they are actively logged in to Splunk.
Q1 : The following query should give you user quota usage
| rest splunk_server=local /services/search/jobs
| eval diskUsageMB=diskUsage/1024/1024
| rename eai:acl.owner as UserName
| stats sum(diskUsageMB) as totalDiskUsage by UserName
Q2 : You can add a where condition to check totalDiskUsage > yourLimit, save that search, and schedule it as an alert.
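For example, a minimal sketch of the scheduled alert search (the 500 MB threshold is just a placeholder for your own limit):
| rest splunk_server=local /services/search/jobs
| eval diskUsageMB=diskUsage/1024/1024
| rename eai:acl.owner as UserName
| stats sum(diskUsageMB) as totalDiskUsage by UserName
| where totalDiskUsage > 500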
Q3 : You can set up another job to run every 5 minutes and delete the jobs older than 5 minutes from the dispatch directory ($SPLUNK_HOME/var/run/splunk/dispatch).
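Before deleting anything, it can help to review what each user is actually holding in dispatch; here is a rough sketch using the jobs REST endpoint (field availability may vary by Splunk version):
| rest splunk_server=local /services/search/jobs
| rename eai:acl.owner as UserName
| eval diskUsageMB=round(diskUsage/1024/1024,2)
| table UserName sid diskUsageMB ttl isDone
| sort - diskUsageMB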
You may want to add some checks for whether the email even exists or is valid in your company, and maybe add more info to the email as well.
Adding to an old thread... Here is a search which shows searches that were queued due to quota limits. It could be helpful for identifying scheduled jobs that could be rearranged.
index=_audit search_id="*" sourcetype=audittrail action=search info=granted
| table _time host user action info search_id search ttl
| join search_id
[ search index=_internal sourcetype=splunkd quota component=DispatchManager log_level=WARN reason="The maximum disk usage quota*"
| dedup search_id
| table _time host sourcetype log_level component username search_id reason
| eval search_id = "'" + search_id + "'"
]
| table _time host user action info search_id search ttl reason
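To see which users and scheduled searches hit the quota most often, you could append something along these lines (a rough sketch):
| stats count as timesQueued by user search
| sort - timesQueued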
In the Alerts for SplunkAdmins app (also on GitHub) I have searches such as "SearchHeadLevel - Users exceeding the disk quota" and "AllSplunkEnterpriseLevel - Splunk Scheduler skipped searches and the reason" which would also cover these scenarios.
That app is amazing! Thank you for sharing!
A2: You can (ab)use map to send emails individually, using fields from your results as the destination, without writing custom search commands or your own alert script. Rough sketch based on the above:
| rest | eval | rename | stats
| where totalDiskUsage > 123
| join type=left UserName
[| rest splunk_server=local /services/authentication/users
| rename title as UserName | fields UserName email]
| map maxsearches=500 search="| makeresults | sendemail to=\"$email$\" subject=\"Quota Exceeded: $totalDiskUsage$\""
The map will run its search once per result row, sending one email each; makeresults just gives sendemail a dummy row to act on, and maxsearches is needed because map only runs 10 sub-searches by default.
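Related to the earlier point about checking whether the email even exists: if some accounts have no address configured, sendemail may error out for those rows, so you could filter them before the map, for example:
| where isnotnull(email) AND match(email, "@")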
A2. See these (they need a lot of work):
http://answers.splunk.com/answers/54670/send-an-e-mail-to-a-variable-located-in-your-results
AND
http://answers.splunk.com/answers/23476/alert-sending-email-based-on-value-of-a-field
A3 : I was talking about having a script which does the delete. You can configure this script to run as an alert action of a scheduled search.
A1. Exactly what I was looking for, thanks!
A2. My question is more about how I can get the alert to dynamically email only the addresses that appear in the search results. Is there some $emailField$ type of thing? I get how to limit my search results; I want one alert for all users, since one alert per user isn't an option with 200+ users.
A3. Where can I find information on how to perform delete operations inline in Splunk searches? I'd prefer to delete jobs that are old and haven't been accessed, or are duplicates, per user IF their quota is full.