Alerting

Do saved triggered alert results count against the user's disk usage quota until the alert expires?

eranga
Engager

I have alerts configured to expire after 100 days and scheduled to run their search query every 10 minutes. I can see the alert search jobs listed under "| rest /services/search/jobs" and consuming disk space.

I could not find anything about this in the logs. Could someone help me understand the relationship between disk quota utilization and the triggered alert retention period?
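For reference, a rough sketch of a search along those lines; the isSavedSearch, diskUsage, and ttl fields are what I believe the jobs endpoint exposes, so adjust to whatever your version actually returns:

| rest /services/search/jobs
| search isSavedSearch=1
| eval diskUsageMB = round(diskUsage / 1024 / 1024, 2)
| table label eai:acl.owner diskUsageMB ttl
| sort - diskUsageMB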

1 Solution

richgalloway
SplunkTrust

Please understand that alerts *never* expire.  They will continue to run until you disable or delete them.

What *does* expire are the alert *results*.  That is, the data found by the query that ran to trigger (or not trigger) the alert.  That data is kept on the search head and is subject to disk space limits based on the role of the user running the alert.  Without such limits, the search head would risk running out of space to store new search results.

IMO, there's very little need to preserve alert results beyond the standard 2p (two scheduled periods, the default dispatch.ttl).  Perhaps 24 or 72 hours, but not 100 days.
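For illustration, here's roughly where those knobs live; the stanza names are placeholders and the values are just the documented defaults, so check the savedsearches.conf and authorize.conf specs for your version:

# savedsearches.conf (per alert; stanza name is a placeholder)
[my_alert]
# How long the triggered alert record, and the link to its results, stays in the triggered alerts list
alert.expires = 24h
# How long the search artifacts (the results stored on the search head) are kept; 2p = two scheduled periods
dispatch.ttl = 2p

# authorize.conf (per role)
[role_user]
# Maximum disk space, in MB, that search jobs owned by users holding this role can use
srchDiskQuota = 100

As I understand it, when a tracked alert triggers, the artifact's ttl is extended to alert.expires, which is why a 100-day expiration keeps 100 days' worth of results counting against the owner's disk quota.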

---
If this reply helps you, Karma would be appreciated.


eranga
Engager

Thank you for the clarification @richgalloway