Enhancement case #464044
Requesting that Splunk add an option to send an all-clear alert after x iterations in which an alert condition no longer matches, following one or more iterations in which it did match.
Example: An alert is triggered for CPU usage at or above 95%, which may send an email, post a notice in Triggered Alerts, run a script, and/or post to Slack.
I would like a checkbox option to send another alert when the condition has cleared, i.e., CPU has dropped below 95%.
Ideally this would be a checkbox with an option for how many negative matches are required before an all-clear alert is sent.
If I check cpu average every 5 minutes and get these values:
check01 5:05 PM CPU 95%
check02 5:10 PM CPU 95%
check03 5:15 PM CPU 75%
check04 5:20 PM CPU 75%
check01 would trigger the initial alert
check03 would trigger the all clear
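The requested behavior amounts to a small state machine. A minimal Python sketch (not Splunk code; the function and parameter names are hypothetical) that reproduces the example above:

```python
# Sketch of the requested feature: raise an alert on the first threshold
# breach, then send an all-clear after `clear_after` consecutive
# non-matching checks. Names here are illustrative, not Splunk's.

def process_checks(readings, threshold=95, clear_after=1):
    """Yield (check_name, event) pairs: 'alert' on the first match,
    'all_clear' after `clear_after` consecutive readings below threshold."""
    alerting = False   # are we currently in an alerted state?
    misses = 0         # consecutive checks below the threshold
    for name, value in readings:
        if value >= threshold:
            misses = 0
            if not alerting:
                alerting = True
                yield (name, "alert")
        elif alerting:
            misses += 1
            if misses >= clear_after:
                alerting = False
                misses = 0
                yield (name, "all_clear")

checks = [("check01", 95), ("check02", 95), ("check03", 75), ("check04", 75)]
print(list(process_checks(checks)))
# → [('check01', 'alert'), ('check03', 'all_clear')]
```

With `clear_after=1` this matches the example: check01 fires the initial alert, check03 fires the all-clear; raising `clear_after` would delay the all-clear to guard against flapping.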
Enhancement case #464044 opened.
We would like to integrate Splunk Enterprise with Remedy through the IBM Netcool alarm monitoring tool to automate Incident monitoring.
The IBM Netcool team expects Splunk to send the alert through a webhook; Netcool will then forward events to Remedy to create an automatic Incident in the ticketing tool.
Once the ticket is created, we follow the incident management process and close the ticket. At that point, however, Remedy doesn't allow the ticket to be closed, because the alert is still active in Netcool.
Netcool therefore expects Splunk Enterprise to send an alert-clear notification to One FM again to clear the alarm in Netcool and close the ticket in Remedy.
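The missing piece in that flow is the clear event. A hedged sketch of the webhook side in Python (the field names and the "Severity 0 = clear" convention are assumptions loosely based on common Netcool/OMNIbus setups, not a confirmed schema; verify against your own integration):

```python
# Sketch of paired trigger/clear webhook payloads. The field names and
# severity values are assumptions, not Netcool's documented schema.
import json
import urllib.request

def build_event(host, alert_name, clearing=False):
    """Build a webhook payload; severity 0 conventionally clears an
    active alarm in many Netcool setups (verify against yours)."""
    return {
        "Node": host,
        "AlertGroup": alert_name,
        "Severity": 0 if clearing else 5,  # assumed: 0 = clear, 5 = critical
        "Summary": f"{alert_name} {'cleared' if clearing else 'triggered'} on {host}",
    }

def post_event(url, event):
    """POST the payload as JSON (requires a reachable webhook endpoint)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

print(build_event("web01", "HighCPU"))        # the triggering event
print(build_event("web01", "HighCPU", True))  # the matching clear event
```

The point is that the clear payload must carry the same identifying fields (Node, AlertGroup) as the trigger so Netcool can deduplicate and clear the right alarm.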
I see the initial post dates from 2017, and in almost three years Splunk hasn't acknowledged this feature or offered a workaround for the request...
I think my last one took about two years 🙂 Let's hope this one is faster, as it is relevant to a wide Splunk audience. Also, many Splunk competitors have this out of the box. If not, I will have to write my own shell-script-and-lookup-table solution.
I played with this idea before the alert action framework came out, and we achieved alert clearing with a script triggering SNMP: an alert fires and flips a state field in the KV store, then an inverse search checks that the alert had been triggered and watches for the condition to drop back below the threshold, at which point it triggers the clear...
I am sure you could do something similar with the Slack API...
I'll play with this idea again and see if I can tell a good story for an enhancement request.
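Outside Splunk, that KV-store state-flip workaround amounts to roughly this sketch (a plain dict stands in for the KV store collection, and the two functions stand in for the two scheduled searches; all names are hypothetical):

```python
# Rough sketch of the KV-store state-flip workaround: the alert search
# fires and records state, the inverse search fires the clear and flips
# the state back. A dict stands in for Splunk's KV store.

kvstore = {}  # stand-in for a KV store collection keyed by alert name

def alert_search(name, matched):
    """The original alert: on a match, fire once and record the state."""
    if matched and kvstore.get(name) != "triggered":
        kvstore[name] = "triggered"
        return "fire alert"  # e.g. send SNMP trap / Slack message
    return None

def inverse_search(name, matched):
    """The inverse search: if the alert was triggered and the condition
    has dropped, fire the clear and flip the state back."""
    if not matched and kvstore.get(name) == "triggered":
        kvstore[name] = "clear"
        return "fire all-clear"
    return None

print(alert_search("HighCPU", True))     # → fire alert
print(inverse_search("HighCPU", True))   # condition still matching → None
print(inverse_search("HighCPU", False))  # → fire all-clear
```

Because the state lives in the KV store, the alert also won't re-fire on every scheduled run while the condition persists, which doubles as throttling.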
Ah, therein lies the beauty I see in Splunk. You are only truly limited by what you can crank out.
Some request a feature... some just build it...
Much easier to convince people to build something when you show them a working example 😉
Would you like to clear the alert, or clear the throttle? I don't see much point in the former, but the latter could be potentially useful. You should be able to use the REST API to delete the alerts based on an additional search that looks for the all-clear condition.
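A hedged sketch of that REST-API approach in Python (the `/services/alerts/fired_alerts/...` endpoint path and token auth are from memory; check your Splunk version's REST API reference before relying on them):

```python
# Sketch of deleting a triggered-alert entry over Splunk's REST API.
# The endpoint path and auth scheme are assumptions to verify.
import urllib.request

def build_delete_request(base_url, token, entry_path):
    """Build a DELETE request for one fired-alert entry, e.g.
    /services/alerts/fired_alerts/<savedsearch_name>."""
    return urllib.request.Request(
        base_url + entry_path,
        method="DELETE",
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_delete_request(
    "https://localhost:8089",          # management port (assumed default)
    "YOUR_TOKEN",                      # placeholder auth token
    "/services/alerts/fired_alerts/HighCPU",
)
# urllib.request.urlopen(req) would send it against a live instance.
print(req.get_method(), req.full_url)
```

A second scheduled search that detects the all-clear condition could run something like this as a scripted/custom alert action to tidy up the Triggered Alerts list.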
My main goal is to send out an all clear notice after x negative matches on the alert criteria which would then have the same notification options as the original alert. I imagine some would also like to see an auto-delete/acknowledgement after something has self-resolved.