Alerting

How to handle delayed events in Splunk Alerts

VatsalJagani
SplunkTrust

How do you best choose the time range for Splunk alerts to handle delayed events, so that no events are skipped and no events are counted twice?


VatsalJagani
SplunkTrust

There are many simple solutions out there, as well as some apps and more sophisticated approaches that use the KV store to keep track of delayed events, but I found them too complicated to apply consistently across all alerts.

Here is the approach I have been using effectively in many of the Splunk environments I work on (example searches follow the list below):

  1. If the events are not expected to be delayed much (for example: UDP inputs, Windows inputs, file monitoring)
    1. earliest=-5m@s latest=-1m@s
    2. earliest=-61m@m latest=-1m@m
    3. Events can be delayed by a few seconds for many different reasons, so I have found it safe to set the latest time to 1 minute before now.
  2. If the events are expected to be delayed by much more (for example: Python-based inputs, custom add-ons)
    1. earliest=-6h@h latest=+1h@h _index_earliest=-6m@s _index_latest=-1m@s
    2. Here I prefer to use index time as the primary reference, for a few reasons:
      1. The alert triggers close to the time the event actually appears in Splunk.
      2. We don't miss any events.
      3. We still catch events even if they are delayed by a few hours or more.
      4. We also catch events that carry a future timestamp, just in case.
    3. We still add earliest and latest alongside the index-time filters, because:
      1. Searching over all time makes the search much slower.
      2. With earliest, you set the maximum amount of time you expect events to be delayed.
      3. With latest, you can allow for events that arrive with a future timestamp.
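For illustration, here is roughly how the two patterns could look inside a complete alert search. The index names, sourcetypes, and thresholds below are placeholders for the sake of the example, not part of the recommendation itself:

  Low-delay sources (pattern 1):
    index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625 earliest=-61m@m latest=-1m@m
    | stats count by user
    | where count > 5

  Potentially delayed sources (pattern 2, index time as the primary filter):
    index=custom_addon_data earliest=-6h@h latest=+1h@h _index_earliest=-6m@s _index_latest=-1m@s
    | stats count by host

In both cases, the width of the window you step through (event time in the first search, index time in the second) should match how often the alert is scheduled, so that consecutive runs neither overlap nor leave gaps.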


Please let me know if I'm missing any scenarios, or post any other solutions you have for other users in the community.
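
One more tip: to decide which of the two buckets a data source falls into, you can measure its actual indexing delay by comparing index time with event time. A minimal sketch, using a placeholder index name:

  index=my_index earliest=-24h
  | eval lag_seconds = _indextime - _time
  | stats avg(lag_seconds), perc95(lag_seconds), max(lag_seconds) by sourcetype

If the maximum lag stays within a minute or so, the plain earliest/latest pattern is enough; if it regularly reaches minutes or hours, the index-time pattern is the safer choice.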
