Monitoring Splunk

Is there any way to equally distribute the storage load across all 4 indexers? Does the data rebalancing option help here?


Hi Experts,

We have 4 physical indexers in a cluster, and for the past few days the /splunk filesystem has been at its storage threshold on 2 of the 4 indexers.

Is there any way to equally distribute the storage load across all 4 indexers? Does the data rebalancing option help here?



You can rebalance buckets and it will probably help somewhat for a short time, but it's worth digging into why such unbalanced storage use occurred in the first place.
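If you do decide to rebalance, this is a rough sketch of how it's kicked off from the CLI on the cluster manager (assuming splunk is on the PATH; exact options may vary by version, and it can also be started from the Manager UI):

    # Run on the cluster manager node
    splunk rebalance cluster-data -action start

    # Check progress, or stop it if it's impacting the cluster
    splunk rebalance cluster-data -action status
    splunk rebalance cluster-data -action stop

Note that data rebalance moves existing buckets around; it does nothing about where new data lands, which is governed by forwarder load balancing as described below.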

A Splunk forwarder connects to a single, randomly chosen output from its load-balancing group and sends its events to that one output until a switching threshold is reached (see @gcusello's reply).

If you have, for example, just one forwarder and you get extremely unlucky, that forwarder may keep pushing events to a single indexer for a long time.

This is how Splunk's load balancing works.

Now what can you do about it?

Lowering the switching thresholds is one option: the more often a forwarder chooses where to send data, the more likely it is that, in the long run, the distribution among outputs will be relatively uniform. But that comes at the price of extra reconnection overhead, so you have to find a reasonable balance between longer connections (for performance) and shorter ones (for load balancing).
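As a sketch, the time-based threshold is autoLBFrequency in outputs.conf (default 30 seconds). The group name, server hostnames, and the 10-second value below are placeholders for illustration:

    [tcpout:my_indexers]
    server = idx1:9997, idx2:9997, idx3:9997, idx4:9997
    # Pick a new target more often than the 30s default,
    # at the cost of more frequent reconnections
    autoLBFrequency = 10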

The more distinct source forwarders you have, the more likely it is that, as a group, they will be hitting those indexers uniformly.

And the more indexers you have, the lower probability that the forwarder will hit the same indexer again and again.

So if you had, for example, several indexers and many UFs but ingested all events from the UFs via a single HF, you'd be crippling your load balancing severely.
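The statistics above can be illustrated with a toy simulation (plain Python, not Splunk code; forwarder switching is modeled as a uniform random pick of an indexer at each interval):

```python
import random
from collections import Counter

def simulate(n_forwarders, n_indexers, n_switches, seed=0):
    """Each forwarder independently picks a random indexer at every
    switch interval; count how many intervals each indexer serves."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_forwarders):
        for _ in range(n_switches):
            counts[rng.randrange(n_indexers)] += 1
    return counts

# One forwarder, few switches: the split can be very lopsided.
few = simulate(n_forwarders=1, n_indexers=4, n_switches=8)

# Many forwarders: the aggregate load per indexer evens out.
many = simulate(n_forwarders=200, n_indexers=4, n_switches=8)
```

With one forwarder and eight switching intervals it's entirely possible for one indexer to get most of the traffic; with 200 forwarders the per-indexer totals cluster tightly around the mean. The same logic applies to funneling many UFs through one HF: you collapse many independent random choices into one.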



Hi @sbhatnagar88,

When there are multiple indexers, the load should be balanced across them (there's automatic load balancing, autoLB), but I've found that usually this isn't true.

During a training, an instructor said that if there are fewer Heavy Forwarders than indexers this could happen, because when an HF starts to send logs to an indexer it continues for as long as that indexer is available, so some indexers could be used less; but this wasn't confirmed by other Splunk people.

Anyway, you can change a parameter to better distribute the forwarders' (HFs' and UFs') output across the indexers; in outputs.conf you could use:

connectionTTL = <integer>
* The time, in seconds, for a forwarder to keep a socket connection
  open with an existing indexer despite switching to a new indexer.
* This setting reduces the time required for indexer switching.
* Useful during frequent indexer switching potentially caused
  by using the 'autoLBVolume' setting.
* Default: 0 seconds
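For example, a sketch of a stanza combining volume-based switching (autoLBVolume) with connectionTTL; the group name, server hostnames, and values here are placeholders, not recommendations:

    [tcpout:my_indexers]
    server = idx1:9997, idx2:9997, idx3:9997, idx4:9997
    # Switch to another indexer after roughly 1 MB (autoLBVolume is in bytes)
    autoLBVolume = 1048576
    # Keep the old socket open for 60s so the switch doesn't stall in-flight data
    connectionTTL = 60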

For more info, see the outputs.conf spec in the Splunk documentation.


