I have six indexers, one search head and a cluster manager on different hardware.
Is there anything I can do about this?
This generally happens if your forwarders are sending data to only 3 of the indexers, and that data gets replicated to the 3 remaining indexers. By default, the indexer that receives the data from the forwarder acts as the primary indexer for that data and will answer all search requests for it.
The best practice recommendation is to spray your data from the forwarders across all the indexers in the pool. This ensures that all indexers actively participate in searches and share the load.
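A minimal sketch of what that looks like in a forwarder's outputs.conf — the hostnames, port, and frequency value here are placeholders, not your actual settings:

```ini
# outputs.conf on each forwarder (server names are illustrative)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# List every indexer in the pool so the forwarder rotates across all of them
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997, idx4.example.com:9997, idx5.example.com:9997, idx6.example.com:9997
# Switch to a new indexer every 30 seconds instead of holding one long-lived connection
autoLBFrequency = 30
```

Lowering autoLBFrequency makes the forwarder rotate targets more often, which spreads primary buckets (and therefore search load) more evenly across the pool.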
We use DNS round robin across all six indexers, which are physically identical, and according to the S.o.S app the indexed volumes are comparable across the cluster, so I don't think that is the cause.
Right now we have 5 indexers running mostly idle, each with between 7 and 10 Splunk processes, and one with a load average of 90 and 55 Splunk processes (it has 32 logical cores).
At other times we have had two or three running very hot while the others remained idle, which causes major issues with front-end searching, to the point where it is almost unusable.
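One way to confirm whether ingestion is actually skewed toward particular indexers is a quick distribution search — a sketch, assuming you want all indexes over a recent time range:

```
| tstats count where index=* by splunk_server
```

If the counts are roughly even but one indexer still runs hot, the skew is more likely in which indexer holds the primary copies of recent buckets than in raw indexed volume.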