My first few attempts at rebalancing were pretty great. No muss, no fuss. They ran for about 12 hours and like magic my cluster was firing on all cylinders. Beautiful.
'Stuff happens' and I'm now in the situation again where I've introduced new servers to the cluster (replacing old ones). Now I'm way out of balance. "No problem," says I. "Data rebalancing is awesome."
Not so fast. Literally. I fired it off late Friday night. By Monday morning the process was reporting 0.14% done, 0.01% more than right after starting the process 56 hours earlier. By my math that's about 650 days to complete.
I stopped the process, and restarted it for 1 index only -- 648 buckets using 1 TB of disk. After running for the last 18 hours, it's at 3% complete. So slow as to not really be usable.
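For reference, the commands I'm using on the cluster master look roughly like this (the index name is just a placeholder for the one I'm rebalancing):

> splunk rebalance cluster-data -action stop
> splunk rebalance cluster-data -action start -index my_index
> splunk rebalance cluster-data -action status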
12 servers in the cluster, 4 are new; all are:
Any suggestions appreciated.
> splunk btool server list clustering | grep max_peer
max_peer_build_load = 2
max_peer_rep_load = 5
max_peer_sum_rep_load = 5
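Those are the stock defaults. They live in server.conf under the [clustering] stanza on the master; bumping them would look something like this (the higher values are just a guess on my part, not something I've tested):

# server.conf on the cluster master -- untested guess at more aggressive replication limits
[clustering]
max_peer_build_load = 4
max_peer_rep_load = 10
max_peer_sum_rep_load = 10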
Try lowering rebalance_threshold on the master to see if rebalance performance improves. You can then rebalance in multiple waves by raising rebalance_threshold step by step.
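For example, something like this in server.conf on the master (0.80 is just an illustration of a lower starting point; the default is 0.90, and you would raise it again for a later wave):

# server.conf on the cluster master
[clustering]
# Roughly: rebalance stops once each index's bucket distribution is within this
# fraction of perfectly even. Start low to move less data per wave, raise it later.
rebalance_threshold = 0.80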
Hah! @twinspop I seem to be following you from https://answers.splunk.com/answers/476015/whats-the-best-method-to-updatereplace-indexer-clu.html
I am in the same spot now and indexer rebalance is painfully slow. Did you find out the cause for this?
My cold storage was on NAS. Since moving all stages (hot, warm, cold) onto local drives, rebalance has been plenty fast.
Hmmm - we have all our storage on local drives - except frozen/archive - which is on NFS. Does frozen/archive drive count?
The problem (apparently) was related to COLD storage being on NAS. Since restructuring our storage to get everything onto local drives, data rebalance is a quick process.
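For anyone hitting this later: an easy way to confirm where an index's cold buckets actually resolve to is btool on the peer (index name is a placeholder):

> splunk btool indexes list my_index | grep -E 'homePath|coldPath'

If coldPath lands on an NFS/NAS mount, that's where the rebalance bottleneck was for us.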