I'm having a problem where data is not being distributed evenly across my indexers. I have 21 indexers (indexer04-indexer24) receiving data from six heavy forwarders.
My outputs.conf on my heavy forwarders looks like this:
[tcpout:myServerGroup]
autoLBFrequency=15
autoLB=true
disabled=false
forceTimebasedAutoLB=true
writeTimeout=30
maxConnectionsPerIndexer=20
server=indexer04:9996,indexer05:9996,indexer06:9996,<snip>,indexer24:9996
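For reference, the effective outputs settings on a forwarder can be double-checked with btool (path assumes a default install location):

$SPLUNK_HOME/bin/splunk btool outputs list tcpout --debug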
However, when I run a simple test search, for example
index=main earliest=-1h@h latest=now | stats count by splunk_server | sort - count
The event count is massively disproportionate across the indexers: indexer13 has twice the events of the next-busiest indexer, and the least busy indexers have only a sixth of indexer13's event count. Likewise, our external hardware monitoring shows indexer13 carrying a heavier load.
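To quantify the skew, an extended version of the same search computes each indexer's percentage share of events (a sketch; it relies only on the default splunk_server field):

index=main earliest=-1h@h latest=now
| stats count by splunk_server
| eventstats sum(count) as total
| eval pct=round(100*count/total,1)
| fields - total
| sort - count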
I temporarily stopped indexer13 and the other indexers picked up the slack, but as soon as I brought indexer13 back online it became the king of traffic again.
I've broken it down by heavy forwarder (see the search below), and every single one of them sends more events to indexer13 as well. I'm at a loss; indexer04-indexer24 all share the same configuration, though indexer13-indexer24 have beefier hardware since they are newer builds.
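Here's roughly how I did the per-forwarder breakdown, using the tcpin_connections entries in metrics.log on the indexers, where the hostname field identifies the sending forwarder (a sketch, assuming internal logs are searchable from the search head):

index=_internal source=*metrics.log* group=tcpin_connections
| stats sum(kb) as total_kb by hostname, splunk_server
| sort - total_kb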
Are there any settings I'm missing to get data distributed evenly across my indexers?