One thing to check is whether there has been a delay in some of the data getting into the index. A delay in the data coming in may result in the alert running and finding nothing, thereby triggering your email alert action, even though the data does eventually arrive in Splunk after the alert has run.

For example, let's say you have an alert that looks across 5 minutes of data and runs every 5 minutes. The alert runs at 12:00 and checks for data from 11:55 - 12:00. When the search runs at 12:00 it finds no results, so it triggers your email alert action. However, at 12:01, some data timestamped 11:58 arrives in Splunk. At that point it is too late for the alert: it already ran at 12:00, checked 11:55 - 12:00, found no data, and triggered the email alert action. Now, let's say at 12:15 you go back and manually run the same search for 11:55 - 12:00. This time the search DOES find the data timestamped 11:58. That is because the data did eventually arrive in Splunk; it just arrived at 12:01, which was too late for the alert to see it.

The way to check whether there is a delay in your data is to run a basic event search (no stats, timechart, etc.) over any time range, and then add the following:

| eval index_delay=_indextime-_time
| where index_delay>10
| eval index_time=_indextime
| convert ctime(index_time)

Now you will see only the events that were delayed by at least 10 seconds in arriving in Splunk. If you want to see data that was delayed by 30 seconds, change index_delay>10 to index_delay>30. Additionally, you will have a human-readable time field called index_time, which is the time the event was actually indexed. If an event has an index_time that is AFTER the time your alert ran, then it's likely that your alert never saw that event.
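If you would rather get an overall picture of how big the delay typically is, instead of eyeballing individual events, a variation like this can summarize it per sourcetype (just a sketch; swap in your own index/sourcetype filters in place of index=*):

index=*
| eval index_delay=_indextime-_time
| stats max(index_delay) AS worst_delay_secs avg(index_delay) AS avg_delay_secs by sourcetype
| sort - worst_delay_secs

This shows, for each sourcetype, the worst and average number of seconds between an event's timestamp (_time) and when it was actually indexed (_indextime), which tells you how far behind real time your data tends to run.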
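Once you know the typical delay, one common workaround (assuming your data reliably arrives within, say, 5 minutes) is to shift the alert's time window back by that amount, so the alert only searches data that has had time to be indexed. For example, instead of searching 11:55 - 12:00 at 12:00, the alert running at 12:00 could search 11:50 - 11:55:

index=your_index earliest=-10m@m latest=-5m@m ...

Here your_index and the 5-minute offset are placeholders; use whatever delay you measured above. The trade-off is that the alert fires that much later after the events actually occur.

I hope this helps!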