We are integrating Splunk and ServiceNow. We use alert triggers that call the ServiceNow Event Integration trigger (which calls snow_event.py; note we don't presently use snow_incident.py). It is easy enough to send an alert (an event) to the ServiceNow system. However, when the alert clears in Splunk we want to clear (close) the corresponding event in ServiceNow. How can we, from Splunk, manage closing a previously fired alert when the alert condition goes away? For example:
We want to send an alert (event) to ServiceNow from Splunk when a node (host) has high CPU utilization (severity = critical). When the CPU usage returns to normal, we need a way to send another event to ServiceNow with severity=0 that resolves the associated incident and closes the alert in ServiceNow. Does the ServiceNow add-on provide a way to do this? I've gone through the documentation and couldn't find one. Has anyone figured out a way to manage the state (via severity) of an alert from Splunk to ServiceNow after event creation?
Thanks for your help!
It sounds like what you are looking for is the snoweventstream command (the examples below use its incident counterpart, snowincidentstream).
The following example search creates an incident when CPU usage is 95 or higher:

sourcetype="CPURates" earliest=-5m latest=now
| stats avg(CPU) as CPU last(_time) as time by host
| where CPU>=95
| eval contact_type="email"
| eval ci_identifier=host
| eval priority="1"
| eval category="Software"
| eval subcategory="database"
| eval short_description="CPU on ".host." is at ".CPU
| snowincidentstream

The following example search closes the above incident in ServiceNow version Eureka when CPU usage drops below 15:

sourcetype="CPURates" earliest=-5m latest=now
| stats avg(CPU) as CPU last(_time) as time by host
| where CPU<15
| eval contact_type="email"
| eval ci_identifier=host
| eval state="7"
| eval category="Software"
| eval subcategory="database"
| eval short_description="CPU on ".host." is at ".CPU
| snowincidentstream
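Since the original question uses the Event Integration (snow_event.py) rather than incidents, a similar pair of searches could in principle use snoweventstream and a severity field instead. The sketch below is an assumption, not a tested search: the field names (node, resource, type, severity, description) are based on ServiceNow Event Management's em_event correlation keys, and ServiceNow generally correlates events that share the same source/node/type/resource, so a follow-up event with severity 0 on the same keys should clear the alert. Check the add-on's documentation for the exact fields your version of the snoweventstream command supports.

```
sourcetype="CPURates" earliest=-5m latest=now
| stats avg(CPU) as CPU by host
| where CPU<15
| eval node=host
| eval resource="CPU"
| eval type="CPU utilization"
| eval severity="0"
| eval description="CPU on ".host." back to normal at ".CPU
| snoweventstream
```

A matching creating search would be identical except for the threshold (where CPU>=95) and a critical severity value.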
When you create an event with the alert action, it should log the response somewhere in Splunk. The response should contain the ServiceNow incident / SR / CR that you've created. So you would need to run two searches: one that creates the alerts, and one that clears the alerts when certain conditions are met and also appends the results of the previous alert so that it carries the INC/SR/CR, e.g.
| eval severity=if(_time>=now()-900 AND (isnotnull(CR) OR isnotnull(SR)),0,1)
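One way to sketch that second, clearing search: have the creating search write the ticket numbers it gets back to a lookup, then have the clearing search pull them in by host before computing severity. Everything here is hypothetical, the lookup name open_snow_tickets and the CR/SR field names are assumptions for illustration, and the actual response fields depend on how the add-on logs its results:

```
sourcetype="CPURates" earliest=-5m latest=now
| stats avg(CPU) as CPU last(_time) as _time by host
| where CPU<15
| lookup open_snow_tickets host OUTPUT CR SR
| eval severity=if(_time>=now()-900 AND (isnotnull(CR) OR isnotnull(SR)), 0, 1)
| where severity=0
| snoweventstream
```

The where severity=0 clause keeps the search from re-sending events for hosts that never had an open ticket in the first place.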
So, this is not a question of how to update an existing event in general; it is a question of how to update specifically the severity of the existing event in SNow when the threshold has been un-crossed ... yes?