So, I've got a weird one. According to my monitoring console, the indexing queues on my search head are all pegged at 100%, and have been for a long time. The thing is, nothing should be indexing on that box: it only forwards its internal logs to my indexers, and I'm not running any summary indexes on it.
Is there a way to figure out what's blocking them up? It's not a huge priority beyond the fact that the system is slow compared to my other search heads, and my boss wants to know why it's flagging, partly for academic reasons. The only real lead I have is that when I check the Introspection API on this search head, it shows my corporate_security role with read access, which isn't the case on any other system, including the other search heads.
Note: I don't have a search head cluster, but I am running a clustered pair of indexers.
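For reference, here's roughly what I've been running against the queue metrics (this assumes the standard metrics.log queue events in _internal; the host value is a placeholder for this search head):

    index=_internal host=<search_head> source=*metrics.log* group=queue
    | eval pct_full=round(current_size_kb / max_size_kb * 100, 1)
    | timechart span=5m avg(pct_full) by name

And the variant that only shows queues actually reporting blocked:

    index=_internal host=<search_head> source=*metrics.log* group=queue blocked=true
    | timechart span=5m count by name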
So, it looks like the search is coming back empty for this search head, even though the same search returns results for every other Splunk instance in the deployment. Is it possible that the indexing queues would fill up if it can't forward its internal logs? I wouldn't have thought so.
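In the meantime, since this search head's internal events apparently aren't making it to the indexers, I figured I could at least read metrics.log straight off the disk on that box. Something like this is what I have in mind (assumes a default $SPLUNK_HOME; the tcpout queue name should match the output group from outputs.conf):

    # Show recent queue metrics that report blocked, straight from the local file
    grep "group=queue" $SPLUNK_HOME/var/log/splunk/metrics.log | grep "blocked=true" | tail -20

If the tcpout queue shows up as blocked there, that would at least point at the forwarding path rather than local indexing.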
Wild guess here: check the available disk space on that particular search head. If there's no space left, or very little, it can prevent Splunk from indexing locally and from sending data from the search head to the indexer layer.
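If it's easier, you can also check from that search head's own UI with a REST search; if I'm remembering the endpoint right, partitions-space lists capacity and free space in MB for each mounted partition:

    | rest splunk_server=local /services/server/status/partitions-space
    | table mount_point fs_type capacity free

Also worth knowing: splunkd pauses indexing when free space drops below the minFreeSpace setting in server.conf (5000 MB by default on recent versions), so this can bite well before the disk is literally full.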