Splunk ITSI

Alerts showing late in Episode Review

guptap2
New Member

itsi_tracked_alerts shows events with the correct time, but the same events appear in itsi_grouped_alerts only 15-20 minutes later, which results in a late view of alerts in Episode Review. What could cause this delay?

index=itsi_grouped_alerts sourcetype="itsi_notable:group" Garbage Collection "f7a3cdb2c5a1bf1108305ea0"

5/28/20 9:16:38.000 AM
ArchiveMon: NO
ConfigurationItem: GOE Hybris Admin Europe 2
CustomUrl: http://monspkprdci05:8000/en-US/app/itsi/dynatrace_dashboard?form.kpi=*Garbage Collection*&form.service=hybadm&form.region=eu2
IsStartForAutomation: false
SupportGroupName: GOE_AO_TA_Accenture
aggregated: true
alert_value: 2
automation: FALSE
count: 2

index=itsi_grouped_alerts sourcetype="itsi_notable:group" Garbage Collection "f7a3cdb2c5a1bf1108305ea0"

5/28/20 9:04:17.769 AM
ArchiveMon: NO
ConfigurationItem: GOE Hybris Admin Europe 2
CustomUrl: http://monspkprdci05:8000/en-US/app/itsi/dynatrace_dashboard?form.kpi=*Garbage Collection*&form.service=hybadm&form.region=eu2
IsStartForAutomation: false
SupportGroupName: GOE_AO_TA_Accenture
aggregated: true
alert_value: 1
automation: FALSE
count: 2
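
To narrow down whether the 15-20 minute gap comes from late indexing or from late grouping, it can help to compare _time against _indextime for the same notable event in both indexes. This is only a rough sketch; it assumes the default itsi_tracked_alerts and itsi_grouped_alerts index names and reuses the event id from the events above:

index=itsi_tracked_alerts OR index=itsi_grouped_alerts "f7a3cdb2c5a1bf1108305ea0"
| eval indexed_at=strftime(_indextime, "%F %T"), index_lag_sec=_indextime-_time
| table index sourcetype _time indexed_at index_lag_sec
| sort _time

If index_lag_sec is small in both indexes but the grouped event's _time is much later than the tracked event's, the delay is most likely happening before the grouped notable is written, i.e. in the grouping stage rather than in indexing.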

muhammad_luthfi
Path Finder

Hi @guptap2,

Did you manage to resolve this? I have the same issue, and it is impacting our auto-resolve tickets because they are based on the event time.

Please share any suggestions from your experience.

Thanks.


muhammad_luthfi
Path Finder

Just found a way to fix it. Check the Episodes Processing Times panel on the "Event Analytics Monitoring" dashboard in ITSI, or try the query below:

`itsi_grouped_alerts_index` OR `itsi_tracked_alerts_index`
| rename _indextime as it
| stats earliest(it) as it by index event_id
| xyseries event_id index it
| search itsi_grouped_alerts=* AND itsi_tracked_alerts=*
| eval latency=itsi_grouped_alerts-itsi_tracked_alerts
| fields itsi_tracked_alerts latency
| bin itsi_tracked_alerts span=10m
| stats p99(latency) as "99th Percentile Time" min(latency) as "Min Elapsed Time" median(latency) as "Median Elapsed Time" max(latency) as "Max Elapsed Time" by itsi_tracked_alerts
| rename itsi_tracked_alerts as _time
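
(The latency values here are in seconds, since they are differences between the earliest _indextime seen in each index per event_id.)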

Then go to the Job Manager and search for itsi_event_grouping [real-time]. I deleted the job to refresh it, because its processing time had grown very large.
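
For reference, the real-time grouping job can also be located from a search instead of the Job Manager UI. This is only a sketch using the generic search jobs REST endpoint; the wildcard filter and the exact fields shown (label, dispatchState, isRealTimeSearch, runDuration) may vary by Splunk/ITSI version, so treat it as a starting point:

| rest /services/search/jobs splunk_server=local
| search label="*itsi_event_grouping*" OR title="*itsi_event_grouping*"
| table sid label dispatchState isRealTimeSearch runDuration

Deleting or refreshing the job itself is still easiest from the Job Manager, as described above.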

After that, I could see a new job created and the processing time decreased.

Why did the processing time increase on 7 March? We had an incident in Splunk that created 1000+ alerts, and it seems the processing time increased after that.

[Attached screenshots: muhammad_luthfi_2-1741976644907.png, muhammad_luthfi_0-1741976276305.png]