Alerting

Best Practice for Alert Time Range and Cron Expression

dstuder
Communicator

I'm setting up an alert that I want to run every five minutes, so I set the cron expression like so: "*/5 * * * *". If I set the time range to the last five minutes, is it possible that I could miss events? Does Splunk make sure the two stay in sync? I assume the cron iteration could be slightly off (drift) from the last iteration, so there could be a few seconds that neither run's time range covers. Am I correct in this assumption? If so, what is the best way to do something like this?
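
For context, a minimal sketch of the setup being described, written as a savedsearches.conf stanza (the stanza name and the base search are placeholders, not from the thread):

[example_alert_every_5m]
search = index=wineventlog EventCode=4625
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now

Because -5m is measured from whenever the scheduler actually starts the search, a few seconds of drift between runs can leave a small gap (or overlap) between consecutive windows.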

1 Solution

adonio
Ultra Champion

hello there

I would recommend setting a strict time window on your search and verifying how long the search takes to complete.
Maybe something like: earliest = -7m@m latest = -2m@m
Because both boundaries snap to the minute, each run covers a fixed five-minute window and consecutive windows line up exactly, so you will not miss an event.

hope it helps
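
As a sketch of what that looks like inline in the alert's search string (the index and event pattern are placeholders for whatever the alert actually matches):

index=wineventlog EventCode=4625 earliest=-7m@m latest=-2m@m

The same window can be set on the saved search with dispatch.earliest_time = -7m@m and dispatch.latest_time = -2m@m; the two-minute delay also gives events a little time to be indexed before the window is searched.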


dstuder
Communicator

Ok, maybe not setting it to "All time" now that I think about it (that's just crazy talk), but maybe something like the last seven days instead.


sloshburch
Splunk Employee

You're thinking about this in a very healthy manner. Good job!

Essentially, the data COULD come in delayed. You can use the difference between _indextime and _time to get a sense of that delay. If the delay in your environment is large, you probably want to investigate it, because such a large delay would undermine confidence in any Splunk insight. But if the delay is manageable, you may feel confident setting the time selector to something like the last hour and filtering on index time to ensure you catch everything. Alternatively, if you know the delay is, at most, a few minutes, you could use the dynamic snap-to to run your search over a _time window that is sufficiently far in the past.
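
One quick way to measure that delay (a sketch; substitute the index or sourcetype your alert actually searches):

index=wineventlog earliest=-24h@h
| eval index_lag = _indextime - _time
| stats max(index_lag) AS max_lag_seconds perc95(index_lag) AS p95_lag_seconds

If the lag stays small (a few seconds to a couple of minutes), a delayed window like the one above comfortably covers it.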

I think I'm just articulating what you already knew though.


dstuder
Communicator

Thinking about this a bit, I think I would want to alert based on index time, not event time. For instance, my alert is pulling from Windows event logs. Say an event matching the pattern happened while the Splunk Forwarder was not running, and it only reached the indexer more than five minutes later; I would not be alerted, because earliest and latest are based on _time, right? Should I then include the time range in the search string itself, base it on _indextime, and set the Time Range in the alert to All Time?
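
One possible sketch of that approach, using Splunk's index-time search modifiers (the index and event pattern are placeholders):

index=wineventlog EventCode=4625 _index_earliest=-10m@m _index_latest=-5m@m

This scopes each run by when events were indexed rather than by _time, so an event that arrives late because the forwarder was down is still caught on the run after it lands. The alert's Time Range still has to be wide enough (All Time, or at least wider than the longest outage expected) for the late event's _time to fall inside it.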


sloshburch
Splunk Employee

Upvoted. That is also what I do. Use the snap-to, which will ensure you cover what your brain intended.

More details at: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/SearchTimeModifiers#How_to_speci...
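
To make the snap-to arithmetic concrete (an illustrative example, not from the thread), with earliest=-7m@m and latest=-2m@m on a */5 * * * * schedule:

Run starts 12:05:03  ->  window 11:58:00 to 12:03:00
Run starts 12:10:02  ->  window 12:03:00 to 12:08:00

The @m snap pins both boundaries to whole minutes, so consecutive windows butt up exactly even when the scheduler drifts by a few seconds.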
