I frequently invoke the following search on my search head against an indexer cluster with 10 members:
index=* | dedup splunk_server | table splunk_server
If my search returns fewer than 10 indexers, it's usually an indicator of an indexer problem. Today I invoked my command and saw that the indexer count was only 9. When I ran a follow-on search of index=_internal host=<the missing indexer>, there was a gap/stoppage in the data.
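A minimal way to visualize such a gap, assuming you can read _internal and substitute the affected host, is to chart that host's event counts per minute:

index=_internal host=<the missing indexer> | timechart span=1m count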
After searching around looking for a gap in the data (any indexed data), a crash, a restart, or a stop and a start, I am unable to find corroborating evidence that Splunk was ever down, if in fact it ever was. The hole/gap in the data is now gone. I was not able to hop on the VM with my Unix credentials to look around.
Any words of wisdom on why the initial search "index=* | dedup splunk_server | table splunk_server" was missing an indexer? Is there merit in this search, and/or is there some other quick method to see what's happening? Due to the constraints of my access, I only had the search head to work with -- other components of my environment are not available to me as a remote user. Thank you.
Your response was very useful. Thank you.
Try one of these instead:
| tstats count where index=* by splunk_server, _time
| tstats count where index=_internal by splunk_server, _time
Or build upon them to get the data you are looking for.
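For example, here is a minimal sketch building on the searches above (adjust the span to taste) that charts the distinct count of indexers reporting _internal events over time, so a dropout shows up as a dip below 10:

| tstats count where index=_internal by splunk_server _time span=1m
| timechart span=1m dc(splunk_server) AS reporting_indexers

And since you only have the search head, a REST search against the distributed-peers endpoint can show each indexer's status as the search head sees it (exact field names can vary by version):

| rest splunk_server=local /services/search/distributed/peers
| table title status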
Glad to hear that solved it for you. If it did, please "accept" my answer.