Alerting

How much of a delay is enough delay for an alert?

Path Finder

I've read that a best practice for setting up a (non-real-time) alert in Splunk is to schedule alerts with at least one minute of delay built in, to account for forwarding and indexing delays.

Well, I've got an alert set up to email me whenever a splunkd crash log shows up anywhere in my environment. The alert runs every 5 minutes, with a 1 minute delay, like so:

Time range: earliest -6m@m, latest -1m@m
Cron schedule: */5 * * * *
Trigger condition: number of results > 0
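
For reference, here is a minimal sketch of how that schedule could look in savedsearches.conf; the stanza name, search string, and email address are placeholders rather than my actual config:

    [Alert - splunkd crash log detected]
    # placeholder search; point it at wherever your crash logs are indexed
    search = index=_internal sourcetype=splunkd_crash_log
    enableSched = 1
    cron_schedule = */5 * * * *
    dispatch.earliest_time = -6m@m
    dispatch.latest_time = -1m@m
    # trigger whenever the search returns any results
    counttype = number of events
    relation = greater than
    quantity = 0
    action.email = 1
    action.email.to = me@example.com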

However, I never get an email alert, even when Splunk finds results. Am I just not building enough delay into my alert? Is getting the right amount of delay just a matter of tweaking things until it works?

Thanks!


Re: How much of a delay is enough delay for an alert?

Esteemed Legend

You almost certainly have bad timestamps in your data, so events that really occurred "nowish" are being stamped hours into the future or the past and land outside your alert's time window. Install the Data Curator and Meta Woot apps and fix your _time problems. This is a deep topic and we do a TON of Professional Services (PS) work fixing it for clients; those apps are by no means the whole story, but they are a great first step. This is probably the single biggest (and most important) problem in the wild for Splunk. It is not a problem with the product; it is carelessness and confusion during the onboarding process.
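
If you want to confirm this yourself before installing anything, a quick sanity check is to compare each event's parsed timestamp (_time) against the time it was actually written to the index (_indextime). The sourcetype below is just an assumption; use whatever your crash logs actually come in as:

    index=_internal sourcetype=splunkd_crash_log earliest=-24h
    | eval lag_seconds = _indextime - _time
    | stats count min(lag_seconds) AS min_lag max(lag_seconds) AS max_lag avg(lag_seconds) AS avg_lag BY host

A large positive lag means events arrive well after they occurred, so a 1 minute delay is not enough; a negative lag means events are stamped in the future, which a -6m@m to -1m@m window will never catch.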

