Splunk Enterprise Security

Splunk ES - Incident Review (for SLA)

sherpedz
Loves-to-Learn Lots

Sorry if this has been asked before. I have a Splunk ES installation where we use Incident Review to track incidents and notable events, and we have a requirement to report an SLA for all notable events. We can build a search that returns the incident information and links to the notable event with no problem. However, when the search uses status_label to measure the time from when the event was logged to when it was closed, the result is wrong if someone adds a note to a notable after the incident was closed.

Can you help me make the SLA calculation read only the first time a notable event's status changes to Closed, ignoring all subsequent Closed status entries?
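One approach, sketched here on the assumption that your ES instance uses the default incident review KV store lookup (`incident_review_lookup`) and the stock review statuses (where the closed label is literally "Closed"), is to pull the status-change audit trail and keep only the earliest Closed entry per notable:

```
| inputlookup incident_review_lookup
| lookup reviewstatuses_lookup _key as status OUTPUT label as status_label
| where status_label="Closed"
| stats min(time) as first_closed_time by rule_id
```

The `rule_id` here should match the `event_id` field on the notable, so the result can be joined back to the main search. Field and lookup names can differ between ES versions, so verify them against your own environment first.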

`notable`
| search NOT `suppression` info_search_time=*
| eval review_time=mvindex(review_time,0)
| eval response_time=review_time-info_search_time
| eval still_open=if(status_group!="Closed",now()-info_search_time,null())
| eval closed=if(status_group="Closed",1,0)
| eval in_sla=case((urgency=="critical" AND still_open<=(3600*2)),1,(urgency=="high" AND still_open<=(3600*8)),1,(urgency=="medium" AND still_open<=(3600*72)),1,(urgency=="low" AND still_open<=(3600*120)),1,(urgency=="informational" AND still_open<=(3600*144)),1,1=1,0)
| eval metric_count=case((urgency=="critical" AND (response_time<=(3600*2) OR in_sla=1)),1,(urgency=="high" AND (response_time<=(3600*8) OR in_sla=1)),1,(urgency=="medium" AND (response_time<=(3600*72) OR in_sla=1)),1,(urgency=="low" AND (response_time<=(3600*120) OR in_sla=1)),1,(urgency=="informational" AND (response_time<=(3600*144) OR in_sla=1)),1,1=1,0)
| stats count, sum(metric_count) as metric_met, sum(closed) as closed, sum(response_time) as response_sum, avg(response_time) as response_avg, max(response_time) as response_max, count(still_open) as open, avg(still_open) as avg_open, max(still_open) as max_open, sum(in_sla) as sla_ok by urgency
| appendpipe [inputlookup urgency_list.csv]
| dedup urgency
| eval SLA=case(urgency=="critical",2,urgency=="high",8,urgency=="medium",72,urgency=="low",120,urgency=="informational",144)
| eval "SLA Compliance"=round((metric_met*100/count),2), response_avg=tostring(round((response_avg),0),"duration"), response_max=tostring(round((response_max),0),"duration"), avg_open=tostring(round((avg_open),0),"duration"), max_open=tostring(round((max_open),0),"duration"), overdue=open-sla_ok
| eval count=tostring(count,"commas"), closed=tostring(closed,"commas"), open=tostring(open,"commas"), sla_ok=tostring(sla_ok,"commas"), overdue=tostring(overdue,"commas")
| table urgency, SLA, count, "SLA Compliance", closed, response_avg, response_max, open, avg_open, max_open, sla_ok, overdue
| sort num(SLA)
| eval urgency=upper(substr(urgency,1,1)).substr(urgency,2)
| fillnull value="0" count closed open sla_ok overdue
| fillnull value="100.0" "SLA Compliance" response_avg response_max avg_open max_open
| rename urgency as Urgency, SLA as "SLA Target (Hours)", count as "Total Notables", closed as "Closed Notables", response_avg as "Avg. Time to Close (HH:MM:SS)", response_max as "Max. Time to Close (HH:MM:SS)", open as "Open Notables", avg_open as "Avg. Time Open (HH:MM:SS)", max_open as "Max. Time Open (HH:MM:SS)", sla_ok as "Within SLA", overdue as "Overdue"
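The first-closed time from the incident review audit trail can then be folded into the start of the search above with a left join (again assuming the lookup's `rule_id` matches the notable's `event_id` in your environment), replacing the `review_time`-based response time:

```
`notable`
| search NOT `suppression` info_search_time=*
| join type=left event_id
    [| inputlookup incident_review_lookup
    | lookup reviewstatuses_lookup _key as status OUTPUT label as status_label
    | where status_label="Closed"
    | stats min(time) as first_closed_time by rule_id
    | rename rule_id as event_id]
| eval response_time=first_closed_time-info_search_time
```

Because this takes the minimum Closed timestamp per notable, any later notes or re-saves after closure no longer move the SLA clock. Keep in mind that `join` subsearches are subject to subsearch result limits, so on a large installation it may scale better to bring the first-closed time in via a scheduled lookup instead.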
