I'm trying to create something that displays long-term outages: any index that hasn't had traffic in the last hour.
I've made heartbeat alerts that notify when outages occur, but they're limited to a one-hour window to save resources. After that hour, outages drop off the face of the earth and aren't accounted for. That's okay for alerts, but not for a dashboard, where persistence is the goal.
I'm trying to create a search that returns the names of any index that's had 0 logs in the last hour. I have this so far:
| tstats count where [| inputlookup monitoringSources.csv | fields index | format] earliest=-1h latest=now() by index
| where count=0
However, I know this doesn't work, as I have a dummy index name in that .csv file that doesn't exist. If I'm not mistaken, it should be returning the dummy index with a count of 0 (it does not). How could I do this without inflating the search time range past an hour?
You don't get a result for index=dummy because there are no events with index=dummy. Splunk is not good at finding things that aren't there; or, to put it another way, Splunk is good at not finding things that aren't there.
| tstats count where index=dummy by index
| appendpipe
    [| stats count as _count
     | where _count = 0
     | eval index="dummy"
     | eval count=0]
The appendpipe subsearch counts the rows produced so far; when that count is 0 (i.e. tstats found nothing), it fabricates a placeholder row with index="dummy" and count=0, so the missing index still appears in the results.
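To cover every index in your lookup rather than one hard-coded name, a variation of the same idea (a sketch, assuming monitoringSources.csv has a single index column as in your search) is to append a zero-count row for every index in the lookup and then keep the highest count seen per index:

| tstats count where [| inputlookup monitoringSources.csv | fields index | format] earliest=-1h latest=now by index
| append
    [| inputlookup monitoringSources.csv
     | fields index
     | eval count=0]
| stats max(count) as count by index
| where count=0

Indexes with traffic get their real, non-zero count from tstats and are filtered out by the final where; an index with no events in the last hour survives only as its appended zero row, so it shows up on the dashboard without widening the search time range.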