We have a ton of indexes and need to better understand which ones have stopped receiving events so that we can report and alert on them.
We have a Splunk Enterprise v7.3.3 distributed environment with multiple non-clustered indexers and non-pooled search heads running in standalone mode. Our deployment server (DS), SH, and ES are each on individual hosts, and our ES is configured as a secondary SH. We manage index changes via CLI edits to indexes.conf in a deployment app, followed by redeployment of the relevant server classes.
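Because index creation is managed by hand, keeping an accurate inventory of indexes is part of the overhead. One way to avoid hand-maintaining that inventory is to rebuild it from the REST API. A minimal sketch, assuming the search head can reach all indexers via distributed search and that the index_list.csv lookup used in the search further down only needs an "index" column (the NOT title="_*" clause assumes internal indexes shouldn't be monitored; drop it if they should):

## rebuilds index_list.csv from the indexes actually defined on the indexers
| rest /services/data/indexes splunk_server=*
| search NOT title="_*"
| dedup title
| rename title as index
| fields index
| outputlookup index_list.csv

Scheduling this as a periodic saved search would keep the lookup in step with indexes.conf changes without any extra manual edits.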
We currently use the search below in a dashboard panel. It generates a list of all "0-count" indexes that haven't received events in over 24 hours, but as a static list it takes a lot of additional work to get a holistic view of what changed and when. I'd prefer query logic over a new app, as we're already hoping to pare down some of (our own) 'bloat.'
## generates a list of all "0-count" indexes that haven't received events in over 24 hours...
| tstats count where (index=* earliest=-24h latest=now) by index
| append [| inputlookup index_list.csv | eval count=0]
| stats max(count) as count by index
| where count=0
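To get the "what changed and when" view without adding an app, the same tstats approach can return the last event time per index instead of a 24-hour count. A minimal sketch, assuming index_list.csv has an "index" column and that an index absent from the tstats results has received nothing within your tsidx retention; the 24-hour threshold and the hours_since_last_event field name are just illustrative:

## per-index last-event time; indexes present only in the lookup get latest_event=0
| tstats latest(_time) as latest_event where index=* by index
| append [| inputlookup index_list.csv | eval latest_event=0]
| stats max(latest_event) as latest_event by index
| eval hours_since_last_event=round((now()-latest_event)/3600,1)
| eval last_event_time=if(latest_event=0, "no events found", strftime(latest_event, "%Y-%m-%d %H:%M:%S"))
| where hours_since_last_event>=24
| sort - hours_since_last_event
| table index last_event_time hours_since_last_event

The same search could back a scheduled alert (e.g., run hourly, alert when results exist), and raising or lowering the where threshold tunes how stale an index must be before it appears.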