Alerting

How do I configure a scheduled alert to trigger one email whenever a specific event is found?

Communicator

We've been using real-time alerts to send us an email whenever a specific log/event is hit. However, we only have 4 CPU cores and so can only run 4 real-time alerts.

What is the best configuration for a scheduled alert that runs every minute, so we get one email every time a new log is triggered?

I'm getting stuck because it sends lots of emails each time the alert is triggered.

My criterion is: one new log, one email sent out.

1 Solution

SplunkTrust

So... you want one email per matching event?

1. Open your search page and run the search you want the alert to run.
2. Add _index_earliest=-2m@m _index_latest=-m@m to your search to make sure you look at every event exactly once.
3. Set the time range to however long you expect your maximum indexing delay to be.
4. Click Save As -> Alert.
5. Choose Scheduled -> Cron Schedule -> * * * * * to run every minute. Make sure your time range was retained.
6. Set the trigger to "number of results greater than zero", and trigger for each result.
7. Add the email action.
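For reference, the resulting saved search would end up looking roughly like this in savedsearches.conf (the stanza name, index, and recipient are placeholders for your own values; treat this as a sketch of the settings the UI writes, not a drop-in config):

```
[One email per matching event]
search = index=my_index sourcetype=my_sourcetype "some error" _index_earliest=-2m@m _index_latest=-m@m
enableSched = 1
cron_schedule = * * * * *
dispatch.earliest_time = -2m@m
dispatch.latest_time = -m@m
counttype = number of events
relation = greater than
quantity = 0
alert.digest_mode = 0
action.email = 1
action.email.to = you@example.com
```

alert.digest_mode = 0 is what makes the alert fire once per result rather than once per search run.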

A HUGE word of warning: This can (and will!) lead to floods of emails. I don't recommend sending out one email per matched event, ever. Always add some kind of aggregation or throttling.
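One way to add that aggregation, sketched against a hypothetical search (index, sourcetype, and the error_code field are placeholders): collapse the matches with stats so the alert sends one summary email per scheduled run instead of one email per event.

```
index=my_index sourcetype=my_sourcetype "some error"
    _index_earliest=-2m@m _index_latest=-m@m
| stats count, values(host) AS hosts, min(_time) AS first_seen BY error_code
```

With this shape you'd switch the trigger back to "once" per search run; the email then carries the whole table of what happened in that minute.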




SplunkTrust

The time ranges are two different things. The earliest and latest you set for the alert, in the regular time range picker, filter on the event's time, the _time field. _index_earliest and _index_latest filter on the event's indexing time, the _indextime field.

Why does that distinction matter?
Say you want to alert every minute, for the previous minute. A search might run at 11:12:00, searching from 11:11:00 to just before 11:12:00. If an event is generated at 11:11:59, it might arrive in Splunk at 11:12:01, so this search won't find it... and neither will the next search, because by then it's outside that search's time range.
The lazy fix would be to add a bit of delay, e.g. earliest=-2m@m to latest=-m@m, but that only works if you know your maximum indexing delay and that delay is short.
The proper fix is to largely ignore _time, e.g. earliest=-3d to latest=+d, and primarily filter by indexing time, e.g. _index_earliest=-2m@m to _index_latest=-m@m. That way your primary filter isn't affected by indexing delay, and every incoming event will be looked at by the alert exactly once. Adjust the -3d as appropriate for your environment.
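Putting the proper fix together, the alert search might look like this (index, sourcetype, and the search terms are placeholders for your own data; the time picker set to earliest=-3d, latest=+d):

```
index=my_index sourcetype=my_sourcetype "some error"
    _index_earliest=-2m@m _index_latest=-m@m
```

The wide _time window is just a safety net so events with delayed indexing still fall inside the search; the _indextime filter is what guarantees each event is seen exactly once across runs.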


Communicator

I've added the earliest and latest settings under "Edit trigger conditions", which I think does the same as putting them in the search page, yes/no?

Regarding the need to throttle the alerts: we've not had issues with real-time alerts, we know the data/events that come in quite well, and it shouldn't be a problem in this situation.
