Reporting

FEATURE REQUEST: Splunk Alert: All Clear Notification

Motivator

If you agree, please upvote so Splunk will prioritize for a future release. Enhancement case #464044

Requesting that Splunk add an option to send an all-clear alert after x iterations of the alert condition not matching, after it had matched for one or more iterations.

Example: An alert is triggered for CPU usage above 95%, which may send an email, add a notice in Triggered Alerts, run a script, and/or post to Slack.

I would like to have a check box option to send another alert when the condition has cleared, i.e. CPU has dropped below 95%.

Ideally this would be a checkbox with an option for how many negative matches before an all clear alert is sent.

If I check cpu average every 5 minutes and get these values:
check01 5:05 PM CPU 95%
check02 5:10 PM CPU 95%
check03 5:15 PM CPU 75%
check04 5:20 PM CPU 75%

check01 would trigger the initial alert
check03 would trigger the all clear
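The requested behavior can be sketched as a small state machine. This is an illustrative Python mock of the logic, not anything Splunk ships: `process_checks` and its parameters are hypothetical names, and `clear_after` plays the role of the "how many negative matches" option.

```python
# Hypothetical sketch of the requested "all clear" behavior: fire one alert
# on the first match, then fire one all-clear notice after `clear_after`
# consecutive non-matching checks. All names here are illustrative.

def process_checks(values, threshold=95, clear_after=1):
    """Return (index, event) notifications for a series of CPU checks."""
    events = []
    alerting = False   # currently in the alerted state?
    negatives = 0      # consecutive below-threshold checks while alerting
    for i, value in enumerate(values):
        if value >= threshold:
            negatives = 0
            if not alerting:
                alerting = True
                events.append((i, "alert"))
        elif alerting:
            negatives += 1
            if negatives >= clear_after:
                alerting = False
                negatives = 0
                events.append((i, "all_clear"))
    return events
```

With the example values above and `clear_after=1`, check01 produces the initial alert and check03 produces the all clear; repeated matches (check02) and repeated non-matches (check04) produce nothing.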

Enhancement case #464044 opened.

Observer

Hi All,

We would like to integrate Splunk Enterprise with Remedy through the IBM Netcool alarm monitoring tool to automate incident monitoring.

The IBM Netcool team expects Splunk to send the alert through a webhook, and Netcool will forward events to Remedy to create an automatic incident in the ticketing tool.

Once the ticket is created, we follow the incident management process and close the ticket. However, at that point Remedy doesn't allow us to close the ticket, because the alert is still active in Netcool.

Netcool expects Splunk Enterprise to send an alert-clear notification to One FM again, to clear the alarm in Netcool and close the ticket in Remedy.

I see the initial post started in 2017, and in almost three years Splunk hasn't acknowledged this feature or offered a workaround for the request...

Thanks
Venkatesh.P


Influencer

Is this feature still under consideration? Hope to have it soon.


SplunkTrust

Ideas.splunk.com is where enhancement requests live and they are visible to anyone once logged in.

Perhaps check if it's there? I thought I saw something similar, but I would have to check.


Path Finder

Very interested in this functionality! Was just about to raise a request for this myself.


Splunk Employee

Please do. The more tickets the better!

Path Finder

Same here, very good idea to request this feature.

SplunkTrust

Do you have an enhancement request logged with Splunk via the support system as well? I think this feature would be useful in some scenarios...


Splunk Employee

Yeah, always log things you want to see in the support portal as an enhancement request! It doesn't guarantee anything, but you've got to get it on record!


Motivator

I think my last one took about two years 🙂 Let's hope this one is faster as it is relevant to a wide Splunk audience. Also many Splunk competitors have this out of the box. If not, I will have to write my own shell script and lookup table solution.


Motivator

Enhancement case #464044


Splunk Employee

I played with this idea before the alert action framework came out, and we achieved alert clearing with a script triggering SNMP: an alert fired and flipped a state field in the KV store, then an inverse search checked whether the alert had been triggered and watched for the condition to drop back below the threshold, at which point it triggered the all clear.

I am sure you could do similar with the slack api...

I'll play with this idea again and see if I can tell a good story for an enhancement request.
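A rough Python mock of the state-flip pattern described above, under stated assumptions: a plain dict stands in for the Splunk KV store, and `send_notification` stands in for the SNMP trap or Slack post. None of these names are a real Splunk API.

```python
# Sketch of the KV-store state-flip pattern: one search fires the alert and
# records state; an inverse search clears it when the condition stops matching.
# The dict stands in for the KV store; all names are illustrative.

kvstore = {}  # alert_name -> "firing" | "clear"

def send_notification(alert_name, message):
    # Placeholder for the real alert action (SNMP trap, Slack post, etc.)
    print(f"{alert_name}: {message}")

def on_alert_search(alert_name, condition_matched):
    """Run by the alerting search: fire once when the condition first matches."""
    if condition_matched and kvstore.get(alert_name) != "firing":
        kvstore[alert_name] = "firing"
        send_notification(alert_name, "ALERT")

def on_inverse_search(alert_name, condition_matched):
    """Run by the inverse search: clear once the condition stops matching."""
    if not condition_matched and kvstore.get(alert_name) == "firing":
        kvstore[alert_name] = "clear"
        send_notification(alert_name, "ALL CLEAR")
```

The state field is what prevents the alert from re-firing on every matching interval, and the inverse search only sends an all clear for alerts that are actually in the "firing" state.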

Esteemed Legend

So in essence you built your own alerting framework that you can completely control. That is one way around limitations in the built-in framework!

0 Karma

Splunk Employee

Ah, therein lies the beauty I see in Splunk. You are only truly limited by what you can crank out.

Some request a feature... some just build it...

Much easier to convince ppl to build something when you show a working example 😉

Esteemed Legend

You would like to clear the alert or clear the throttle? I don't see much point in the former, but the latter could potentially be useful. You should be able to use the REST API to delete the alerts, based on an additional search that looks for the clear criteria.
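As a hedged sketch of that REST idea, assuming the Triggered Alerts entries live under the `/services/alerts/fired_alerts` endpoint and accept DELETE (verify against the REST API reference for your Splunk version), the clearing search's script could build requests like this. The host, alert name, and token below are placeholders.

```python
# Sketch: clearing an entry from Triggered Alerts via the Splunk REST API.
# The endpoint path and DELETE support are assumptions to verify against
# the REST API reference for your Splunk version.
import urllib.parse
import urllib.request

def fired_alert_delete_request(host, alert_name, token, port=8089):
    """Build a DELETE request for one entry under the fired_alerts endpoint."""
    url = (f"https://{host}:{port}/services/alerts/fired_alerts/"
           f"{urllib.parse.quote(alert_name, safe='')}")
    req = urllib.request.Request(url, method="DELETE")
    req.add_header("Authorization", f"Bearer {token}")
    return req

# The "clear criteria" search would list the matching alerts, then send each:
#   urllib.request.urlopen(fired_alert_delete_request(
#       "splunk.example.com", "High CPU", token))
```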

Motivator

@woodcock, I was only using the throttle example because it also has a checkbox in the alerting section. I'll edit as not to cause confusion. Thanks.


Motivator

My main goal is to send out an all clear notice after x negative matches on the alert criteria which would then have the same notification options as the original alert. I imagine some would also like to see an auto-delete/acknowledgement after something has self-resolved.
