Splunk ITSI

Alerts showing late in Episode Review

guptap2
New Member

itsi_tracked_alerts shows the correct event time, but the same events appear in itsi_grouped_alerts 15-20 minutes later, which results in a delayed view of alerts in Episode Review. Why would the grouped alerts lag behind the tracked alerts? Sample events from my search are below.

index=itsi_grouped_alerts sourcetype="itsi_notable:group" Garbage Collection "f7a3cdb2c5a1bf1108305ea0"

5/28/20 9:16:38.000 AM
ArchiveMon: NO
ConfigurationItem: GOE Hybris Admin Europe 2
CustomUrl: http://monspkprdci05:8000/en-US/app/itsi/dynatrace_dashboard?form.kpi=*Garbage Collection*&form.service=hybadm&form.region=eu2
IsStartForAutomation: false
SupportGroupName: GOE_AO_TA_Accenture
aggregated: true
alert_value: 2
automation: FALSE
count: 2

5/28/20 9:04:17.769 AM
ArchiveMon: NO
ConfigurationItem: GOE Hybris Admin Europe 2
CustomUrl: http://monspkprdci05:8000/en-US/app/itsi/dynatrace_dashboard?form.kpi=*Garbage Collection*&form.service=hybadm&form.region=eu2
IsStartForAutomation: false
SupportGroupName: GOE_AO_TA_Accenture
aggregated: true
alert_value: 1
automation: FALSE
count: 2
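For reference, a rough way to compare event time against index time for this episode across both indexes (a sketch only; it assumes the default ITSI index names and reuses the event_id from the search above):

(index=itsi_tracked_alerts OR index=itsi_grouped_alerts) "f7a3cdb2c5a1bf1108305ea0"
| eval index_lag = _indextime - _time
| stats earliest(_time) as first_seen max(index_lag) as max_lag_sec by index
| fieldformat first_seen = strftime(first_seen, "%Y-%m-%d %H:%M:%S")

If max_lag_sec is large only for itsi_grouped_alerts, the delay is happening between tracking and grouping rather than at ingestion.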


muhammad_luthfi
Path Finder

Hi @guptap2,

Did you resolve this case? I have the same issue, and it is impacting our auto-resolve tickets because those are based on the event time.

Please share any suggestions from your experience.

Thanks.


muhammad_luthfi
Path Finder

I just found a way to fix it. Check Episode Processing Times on the "Event Analytics Monitoring" dashboard in ITSI, or just try the query below:

`itsi_grouped_alerts_index` OR `itsi_tracked_alerts_index`
| rename _indextime as it
| stats earliest(it) as it by index event_id
| xyseries event_id index it
| search itsi_grouped_alerts=* AND itsi_tracked_alerts=*
| eval latency=itsi_grouped_alerts-itsi_tracked_alerts
| fields itsi_tracked_alerts latency
| bin itsi_tracked_alerts span=10m
| stats p99(latency) as "99th Percentile Time" min(latency) as "Min Elapsed Time" median(latency) as "Median Elapsed Time" max(latency) as "Max Elapsed Time" by itsi_tracked_alerts
| rename itsi_tracked_alerts as _time
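In plain terms, the query joins each event_id's earliest index time in itsi_tracked_alerts with its earliest index time in itsi_grouped_alerts, so the latency column is roughly how long grouping took for each alert, bucketed into 10-minute windows.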

Then go to the Job Manager and search for itsi_event_grouping [real-time]. I deleted that job to refresh it, because its run time had grown very large.
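If you want to check the job from a search before deleting it, a REST query along these lines should list it (a sketch; the label filter is an assumption and may need adjusting for your environment):

| rest /services/search/jobs splunk_server=local
| search label="*itsi_event_grouping*"
| table sid label dispatchState runDuration isRealTimeSearch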

After that, I could see a new job created and the processing time decreased.

Why did the processing time increase on 7 March? We had an incident on Splunk that created 1000++ alerts; it seems the processing time increased after that.

[Screenshots: episode processing time before and after restarting the itsi_event_grouping job]

 
