In regard to your questions:

"Would such an event produce an alert, regardless of _time, if the search relies on DM/tstats?"

If an event comes in late (i.e. its _indextime is well after its _time), then as long as you search a _time range that covers it (via the time picker or earliest/latest) after it has been indexed, your query will still find that data.

"Would it be better instead to use the _index_earliest / _index_latest approach?"

Not necessarily. I typically try to solve things based on _time first; if there is a lot of indexing lag, that is a sign of something that needs to be addressed in the overall observability architecture, and accounted for in the meantime.

"Is there a better and/or simpler approach?"

I'm not sure this is the best way, but it is one way. 🙂 Based on what I understand, I would split this into two separate alerts, since it sounds like a pretty critical situation you want to keep an eye on.

1. The alert for the file hash, running every 15 minutes, but with a lookback longer than 15 minutes. If an event arrives late and was missed by the last run of the alert, its _indextime is "now" but its _time is back in the window you already searched. By looking back farther on each scheduled run, you still catch the "new" event that showed up late, because you are matching on _time. (See the first sketch below.)

2. A second alert that watches the spread between _time and _indextime for that data, split by sourcetype/source/host/whatever makes sense. I've done this before in critical situations as a pre-warning that "stuff could be going bad." You can track a long-term and a short-term median of the _indextime - _time difference; if you summarize it (or, since it is so simple, just write it to a lookup), you can include that info in alert #1 as a "We didn't find anything bad... but things might be heading in that direction based on index delay..." note. (See the second sketch below.)
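To make #1 concrete, here is a minimal sketch of the wider-lookback idea. Everything in it is a placeholder (the data model Your_DM.Your_Dataset, the file_hash and dest field names, and the known_bad_hashes.csv lookup); swap in whatever your real search uses. The alert is scheduled every 15 minutes, but the search window (set in the alert's time range or, as here, in the tstats where clause) covers the last 4 hours of _time:

| tstats count latest(_time) as last_seen
    from datamodel=Your_DM.Your_Dataset
    where earliest=-4h latest=now
    by Your_Dataset.file_hash Your_Dataset.dest
| rename Your_Dataset.* as *
| lookup known_bad_hashes.csv file_hash OUTPUT threat_description
| where isnotnull(threat_description)

Because each run re-covers the last 4 hours, the same match can fire on several consecutive runs, so consider throttling the alert on file_hash (or file_hash plus dest) to avoid duplicate notifications.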
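And here is a rough sketch of the index-delay watcher for #2. Again, the index/sourcetype names, the 4-hour/7-day windows, and the 2x threshold are just assumptions to illustrate the short-term vs. long-term median comparison:

index=your_index sourcetype=your_sourcetype earliest=-7d
| eval delay_sec = _indextime - _time
| stats median(delay_sec) as long_term_median_sec
        median(eval(if(_time >= relative_time(now(), "-4h"), delay_sec, null()))) as short_term_median_sec
        count as event_count
        by sourcetype host
| where short_term_median_sec > 2 * long_term_median_sec

If you append something like | outputlookup index_delay_status.csv to a scheduled copy of this search, alert #1 can then read that lookup (with inputlookup or lookup) and attach the "we didn't find anything bad, but delay is trending up" context to its own results.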