@victorcorrea Have a look at the time modifiers for the concept of 'snap to', which is the @ component of a time constraint. Generally, with an alert it is a good idea to understand whether there is any "lag" between an event being generated by a source and it arriving and being indexed in Splunk.

Consider an event generated at 6:59:58 by a system, sent to Splunk at 7:00:02 and indexed at 7:00:03. If your alert runs at 7:00 and searches earliest=-5m@m latest=@m, then that event with a timestamp of 6:59:58 will not yet be indexed in Splunk, so it will not be found by your alert. If this is one of your "Waiting" events, you may trigger an alert for a count of 2, but if you look at the same data later you will find the count is actually 3, because that latest event is now in the index.

So, consider whether this is an issue for your alert. You can measure the lag with a search like this:

index=foo
| eval lag=_indextime-_time
| stats avg(lag)

If the lag is significant, shift your 5-minute time window back far enough that you do not miss late-arriving events.
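For example, a sketch of a shifted alert window, assuming the measured lag is typically under one minute (index=foo is a placeholder for your own index): snapping both earliest and latest back by one minute keeps a full 5-minute window but gives late events time to be indexed.

index=foo earliest=-6m@m latest=-1m@m
| stats count as waiting_count

Run at 7:00, this searches 6:54:00 to 6:59:00 instead of 6:55:00 to 7:00:00, so an event stamped 6:58:58 that is indexed a few seconds after generation will still be counted.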