I want to create an alert that checks the event count for every index and reports the list of indexes that have received no events in the last 24 hours.
Run the following search and set the alert to trigger when the number of results is greater than zero.
| tstats count where index=* by index | where count = 0
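One thing to watch out for: tstats only returns rows for indexes that actually contain events in the search window, so an index with zero events never shows up at all and the where count = 0 filter has nothing left to match. A rough sketch that starts from the full index list instead, assuming whoever owns the alert can read the /services/data/indexes REST endpoint, would be:

| rest /services/data/indexes splunk_server=local
| fields title
| rename title AS index
| join type=left index
    [| tstats count where index=* by index]
| fillnull value=0 count
| where count=0

The rest call lists every configured index (including internal ones, which you may want to exclude), the subsearch fills in counts for whatever time range the alert runs over, and fillnull turns the missing counts into zeros so the final where clause keeps the silent indexes.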
I'm trying to build a similar alert, but when I run the search below over any time range up to 24 hours I get "No results found", even though four of my indexes show no events received in over four hours under Settings > Indexes.
| tstats count where index=* by index | where count = 0
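The "No results found" is expected with that search: tstats only produces a row for an index when it finds at least one event in the time range, so the silent indexes are simply absent rather than appearing with count = 0. Another sketch, assuming a 7-day lookback is long enough to cover every index you expect data from, is to compare each index's newest event against a 24-hour threshold:

| tstats latest(_time) AS latest_event where index=* earliest=-7d by index
| where latest_event < relative_time(now(), "-24h")
| convert ctime(latest_event)

An index that has been silent for longer than the 7-day lookback would still drop out of the results, which is why a REST-based or lookup-based approach that starts from the full index list is more complete.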
This solution actually worked for me: https://www.splunk.com/en_us/blog/tips-and-tricks/how-to-determine-when-a-host-stops-sending-logs-to....