Alerting

Best Practice for Alert Time Range and Cron Expression

dstuder
Communicator

I'm setting up an alert that I want to run every five minutes, so I set the cron expression like so: "*/5 * * * *". If I set the time range to the last five minutes, is it possible that I could miss events? Does Splunk make sure that the two sync up? I assume it is possible that each cron iteration could be slightly off (drift) from the last, so there could be a few seconds that the time range does not cover because the cron was not totally in sync from one iteration to the next. Am I correct in this assumption? If so, what is the best way to do something like this?

1 Solution

adonio
Ultra Champion

hello there

I recommend setting a strict time window on your search and verifying how long your search takes to complete.
Try something like: earliest=-7m@m latest=-2m@m
Because both boundaries snap to the minute, consecutive five-minute runs cover contiguous windows, which guarantees you will not miss an event.

hope it helps
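To illustrate the idea as a scheduled alert (a sketch only — the stanza name and search are placeholders, not from the original thread), the cron schedule and the snapped, lagged window would go together in savedsearches.conf like this:

```
# savedsearches.conf -- hypothetical alert stanza
[my_five_minute_alert]
# Run every five minutes
cron_schedule = */5 * * * *
# Lagged window, snapped to the minute: covers a fixed 5-minute span
# ending 2 minutes ago, leaving time for late-arriving events
dispatch.earliest_time = -7m@m
dispatch.latest_time = -2m@m
search = index=main sourcetype=wineventlog <your pattern here>
```

Because `@m` snaps both boundaries to whole minutes, a few seconds of cron drift in the launch time does not move the window itself — each run still evaluates an exact, non-overlapping five-minute span.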


dstuder
Communicator

Ok, maybe not setting it to "All time" now that I think about it (that's just crazy talk), but maybe something like the last seven days or so.


sloshburch
Splunk Employee
Splunk Employee

You're thinking about this all in a very healthy manner. Good job!

Essentially, the data COULD come in delayed. You could use the difference between _indextime and _time to measure the drift. If the drift in your environment is large, you probably want to investigate that, because a large drift would undermine confidence in any Splunk insight. But if the drift is manageable, then you may feel confident setting the time selector to something like the last hour and using _indextime to ensure you catch everything. Alternatively, if you know the drift is at most a few minutes, you could use the dynamic snap-to to run your search over a sufficiently old _time window.
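To measure that drift, a search along these lines (index and sourcetype are assumptions — substitute your own) computes the lag between index time and event time over a recent window:

```
index=main sourcetype=wineventlog earliest=-24h@h
| eval drift=_indextime-_time
| stats avg(drift) AS avg_drift_sec
        max(drift) AS max_drift_sec
        perc95(drift) AS p95_drift_sec
```

If `max_drift_sec` stays well under the lag you build into your alert window (e.g. the 2 minutes in -7m@m to -2m@m), you can be reasonably confident the window catches everything.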

I think I'm just articulating what you already knew though.


dstuder
Communicator

Thinking about this a bit, I think I would want to alert based on index time, not event time. For instance, my alert pulls from Windows event logs. If an event matching the pattern occurred and, say, the Splunk forwarder was down for more than five minutes before the event reached the indexer, I would never be alerted, because earliest and latest are based on _time, right? Should I then include the time range in the search string itself, base it on _indextime, and set the Time Range in the alert to All time?
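One way to express this (a sketch — the index, sourcetype, and pattern are placeholders) is with Splunk's index-time modifiers, _index_earliest and _index_latest, which bound the search on _indextime rather than _time:

```
index=main sourcetype=wineventlog
    _index_earliest=-7m@m _index_latest=-2m@m
    earliest=-24h@h latest=now
    <your pattern here>
```

Note that the _time bounds (earliest/latest) still apply alongside the index-time bounds, so they need to be wide enough to include any delayed events you want to catch.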


sloshburch
Splunk Employee
Splunk Employee

Upvoted. That is also what I do. Use the snap-to which will ensure you cover what your brain intended.

More details at: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/SearchTimeModifiers#How_to_speci...
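To see what the snap-to actually resolves to at any given moment, you can evaluate the modifiers with relative_time (a throwaway sanity-check search, not part of the alert itself):

```
| makeresults
| eval window_start=relative_time(now(), "-7m@m"),
       window_end=relative_time(now(), "-2m@m")
| eval start_readable=strftime(window_start, "%Y-%m-%d %H:%M:%S"),
       end_readable=strftime(window_end, "%Y-%m-%d %H:%M:%S")
| table start_readable end_readable
```

Run it a few seconds apart within the same minute and the boundaries will not change — which is exactly why the snapped window is immune to small cron drift.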
