How does Splunk ES create incidents from notable events?

hettervik
Builder

Hi,

How does Splunk ES create incidents from notable events? I'm aware that a correlation search in Splunk ES creates a notable event in the "notable" index, but how exactly does it get from there to the "Incident Review" dashboard in Splunk ES? As far as I know, the incidents exist in a KV store collection, so I would assume there is some scheduled job that takes notable events from the "notable" index and puts them in the KV store collection.

The reason I'm asking is that we are missing incidents in our "Incident Review" dashboard, but the corresponding notable events exist in the notable index. So it looks like the "notable event to incident" job has failed somehow. Is this documented somewhere in more detail?
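For reference, here's roughly how we're confirming that the notable events themselves exist in the index (a rough check, assuming the default "notable" index name; as far as I understand, the source field on each notable event is the name of the correlation search that created it):

index=notable earliest=-7d
| stats count by source
| sort - count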


tscroggins
Influencer

Hi @hettervik,

The Incident Review Notables table is driven by the "Incident Review - Main" saved search. The search is invoked using parameters/filters from the dashboard:

| savedsearch "Incident Review - Main" time_filter="" event_id_filter="" source_filter="" security_domain_filter="" status_filter="" owner_filter="" urgency_filter="" tag_filter="" type_filter="" disposition_filter=""

I don't believe this is directly documented, but all Splunk ES components are shipped in a source-readable form (saved searches, dashboards, Python modular inputs, etc.). The searches may be discussed in the latest revision of the Administering Splunk Enterprise Security course; my last training course was circa Splunk ES 6.1.
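If you want to look at the definition yourself without digging through the file system, a search along these lines should show it (just a sketch using the rest command; run it with a role that can see the SA-ThreatIntelligence knowledge objects):

| rest /services/saved/searches splunk_server=local
| search title="Incident Review - Main"
| table title, eai:acl.app, search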

As a starting point, I would expand the time range and review Notable Event Suppressions under Configure > Incident Management > Notable Event Suppressions. See https://docs.splunk.com/Documentation/ES/latest/Admin/Customizenotables#Create_and_manage_notable_ev... for more information.
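In the versions I've worked with, suppressions are stored as event types whose names start with notable_suppression-, so you can also list them with a search like this (the naming convention may differ slightly between versions):

| rest /services/saved/eventtypes splunk_server=local
| search title="notable_suppression-*"
| table title, search, disabled, updated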

Following that, I would verify the get_notable_index macro value and permissions haven't been modified. The macro is defined in the SA-ThreatIntelligence app:

[get_notable_index]
definition = index=notable

with the following default export settings:

[]
access = read : [ * ], write : [ admin ]
export = system
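To confirm the macro still resolves and is shared/readable as expected, something like this should work (a sketch using the generic configs endpoint; field names in the output may vary slightly by version):

| rest /services/configs/conf-macros/get_notable_index splunk_server=local
| table title, definition, eai:acl.app, eai:acl.sharing

Running `get_notable_index` directly in the search bar as an affected user is also a quick sanity check that the macro expands and returns events.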


hettervik
Builder

Thanks! After some digging, we see that the bug is probably caused by a notable event that is too big. The error message is "events are not displayed in the search results because _raw fields exceed the limit". It seems this one oversized event broke the "Incident Review - Main" search, which in turn caused other incidents to fail to load.

We are deleting the event and fixing the correlation search now, adding a fail-safe to avoid creating such big notable events in the future. Hopefully this fixes the issue!
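In case it's useful to others, the fail-safe we have in mind is simply truncating the offending field at the end of the correlation search, something along these lines (description is just an example field name, and the 10000-character cap is arbitrary; adjust both to your data):

| eval description=if(len(description) > 10000, substr(description, 1, 10000)." ... [truncated]", description)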
