If you agree, please upvote so Splunk will prioritize it for a future release. Enhancement case #464044.
Requesting that Splunk add an option to send an "all clear" alert after the alert condition has failed to match for x iterations, having previously matched for one or more iterations.
Example: An alert is triggered for CPU usage above 95%, which may send an email, add a notice to Triggered Alerts, run a script, and/or post to Slack.
I would like a checkbox option to send another alert when the condition has cleared, i.e., CPU has dropped below 95%.
Ideally this would be a checkbox with an option for how many negative matches are required before an all-clear alert is sent.
If I check the CPU average every 5 minutes and get these values:
check01 5:05 PM CPU 95%
check02 5:10 PM CPU 95%
check03 5:15 PM CPU 75%
check04 5:20 PM CPU 75%
check01 would trigger the initial alert
check03 would trigger the all clear (assuming the all clear is sent after a single negative match)
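A minimal sketch of the requested behavior, assuming a configurable count of negative matches; the names, threshold, and sample data below are illustrative only, not an existing Splunk setting:

```python
# Hypothetical sketch: track consecutive non-matching checks and emit one
# "all clear" notification after `CLEAR_AFTER` negative matches.
# The sample data mirrors the example above.
CLEAR_AFTER = 1      # negative matches required before the all clear
THRESHOLD = 95       # alert when CPU reaches this percentage

checks = [
    ("check01", "5:05 PM", 95),
    ("check02", "5:10 PM", 95),
    ("check03", "5:15 PM", 75),
    ("check04", "5:20 PM", 75),
]

alert_active = False   # has the initial alert been sent?
clear_count = 0        # consecutive checks below the threshold

for name, when, cpu in checks:
    if cpu >= THRESHOLD:
        clear_count = 0
        if not alert_active:
            alert_active = True
            print(f"{name} {when}: ALERT - CPU {cpu}%")
    elif alert_active:
        clear_count += 1
        if clear_count >= CLEAR_AFTER:
            alert_active = False
            print(f"{name} {when}: ALL CLEAR - CPU {cpu}%")
```

Running this against the sample checks prints an alert at check01 and an all clear at check03, matching the example above.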
Enhancement case #464044 opened.
You would like to clear the alert or clear the throttle? I don't see much point in the former, but the latter could be potentially useful. You should be able to use the REST API to delete the alerts based on an additional search that looks for the cleared condition.
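For the REST API route, something like the following could work. This is a hedged sketch: it assumes the fired_alerts endpoint accepts DELETE on individual records (worth confirming against the REST API reference for your Splunk version), and the host, credentials, and saved search name are placeholders.

```python
# Hedged sketch: list triggered (fired) alert records over the Splunk REST
# API and delete the ones whose condition a follow-up search shows has cleared.
import requests

BASE = "https://localhost:8089"
AUTH = ("admin", "changeme")          # placeholder credentials
SAVED_SEARCH = "cpu_above_95"         # hypothetical saved search name

# List fired alert records for the saved search.
resp = requests.get(
    f"{BASE}/services/alerts/fired_alerts/{SAVED_SEARCH}",
    params={"output_mode": "json"},
    auth=AUTH,
    verify=False,   # self-signed certs on a dev instance
)
resp.raise_for_status()

for entry in resp.json().get("entry", []):
    # Each entry's "alternate" link points at the individual fired-alert record.
    record_url = BASE + entry["links"]["alternate"]
    # If the clearing search says the condition has resolved, remove the record.
    requests.delete(record_url, auth=AUTH, verify=False)
```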
My main goal is to send out an all-clear notice after x negative matches on the alert criteria, which would then have the same notification options as the original alert. I imagine some would also like to see an auto-delete/acknowledgement once something has self-resolved.
@woodcock, I was only using the throttle example because it also has a checkbox in the alerting section. I'll edit so as not to cause confusion. Thanks.
I played with this idea before the alert action framework came out, and we achieved alert clearing with the script that triggered SNMP: an alert would trigger and flip a state field in the KV store, then an inverse search checked that the alert had been triggered and watched for the condition to drop back below the threshold, at which point it would trigger the clear (rough sketch after this post)...
I am sure you could do something similar with the Slack API...
I'll play with this idea again and see if I can tell a good story for an enhancement request.
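A rough sketch of the KV-store state-flip pattern described above. The collection name, app context, and credentials are assumptions; it presumes an "alert_state" collection has already been defined in collections.conf, and the batch_save upsert behavior is worth verifying for your Splunk version.

```python
# Rough sketch: two alert-action scripts share a state document in the KV store.
# The triggering search flips the state to "firing"; the inverse (clearing)
# search flips it back to "clear" and sends the all-clear (Slack, SNMP, etc.).
import json
import requests

KVSTORE = ("https://localhost:8089/servicesNS/nobody/search"
           "/storage/collections/data/alert_state")
AUTH = ("admin", "changeme")   # placeholder credentials

def set_state(alert_name, state):
    """Upsert one document per alert, flipping its state field."""
    requests.post(
        KVSTORE + "/batch_save",
        auth=AUTH,
        verify=False,
        headers={"Content-Type": "application/json"},
        data=json.dumps([{"_key": alert_name, "state": state}]),
    )

def get_state(alert_name):
    """Read the current state so the clearing search knows whether to act."""
    resp = requests.get(KVSTORE + "/" + alert_name, auth=AUTH, verify=False)
    return resp.json().get("state") if resp.ok else None

# In the alert action for the triggering search:
#   set_state("cpu_above_95", "firing")
# In the alert action for the inverse (clearing) search:
#   if get_state("cpu_above_95") == "firing":
#       set_state("cpu_above_95", "clear")
#       # ...post to Slack / send the SNMP trap here
```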
So in essence you built your own alerting framework that you can completely control. That is one way around limitations in the built-in framework!
Ah, therein lies the beauty I see in Splunk. You are only truly limited by what you can crank out.
Some request a feature... some just build it...
Much easier to convince people to build something when you show a working example 😉
Do you have an enhancement request logged with Splunk via the support system as well? I think this feature would be useful in some scenarios...
Yeah, always log things you want to see as an enhancement request in the support portal! It doesn't guarantee anything, but you've got to get it on record!
I think my last one took about two years 🙂 Let's hope this one is faster, as it is relevant to a wide Splunk audience. Also, many Splunk competitors have this out of the box. If not, I will have to write my own shell script and lookup table solution.