When the cluster peers are restarted, primary buckets are re-assigned by the cluster master. Is there any way to stop the re-assignment from occurring for a planned peer restart?
Replication Factor = 2 and Search Factor = 2
I am running Splunk 6.2.3
I have replication_factor = 2 and search_factor = 2.
According to the documentation: "When you start or restart a master or peer node, the master rebalances the set of primary bucket copies across the peers... To achieve rebalancing, the master reassigns the primary state from existing bucket copies to searchable copies."
If I set search_factor = 1 instead of 2, will rebalancing still occur when an indexer is restarted and comes back up before the heartbeat timeout? With search_factor = 1 there won't be searchable copies on the other indexers, so shouldn't that prevent the primary re-assignment from occurring?
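For reference, this is the master-side server.conf change I'm considering (a sketch only, not my live config; the heartbeat_timeout value shown is just the documented default, not something I've tuned):

```
# server.conf on the cluster master -- hypothetical sketch
[clustering]
mode = master
replication_factor = 2
# Dropping search_factor from 2 to 1 leaves only one searchable copy per
# bucket, so there would be no searchable copy elsewhere to promote.
search_factor = 1
# A peer is considered down if the master sees no heartbeat within this
# many seconds (60 is the default, shown here for illustration).
heartbeat_timeout = 60
```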
There really should be a property file that we could use to turn rebalancing off or on.
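As far as I can tell there is no direct on/off switch, but there is at least one related knob on the master: restart_timeout in the [clustering] stanza of server.conf, which is how long the master waits for a restarting peer to rejoin before it starts bucket fix-up activity. A sketch (the value is illustrative, not a recommendation):

```
# server.conf on the cluster master -- hypothetical sketch
[clustering]
mode = master
# Seconds the master waits for a restarted peer to return before it
# begins fixing up that peer's buckets (default is 60).
restart_timeout = 60
```

Raising this would only delay fix-up during a planned restart, not disable primary rebalancing outright.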
What are you trying to achieve by avoiding reassignment?
I have a replication factor of 2, but I am also using data model acceleration. Since Splunk does not yet replicate accelerated data model tsidx files across peers, every indexer restart means the re-assigned primary buckets need to be re-accelerated. This is a huge resource drain and can take days in a large system. It would be better if the primaries were not re-assigned — then I would not have to rebuild the accelerated data model tsidx files.
I understand this issue well, and empathize. But, data model acceleration replication should be here "soon". For now, I think you're stuck.
Thanks for your response.
Hi lmcmipl, I don't believe there is. The cluster master does the primary reassignment as an attempt to keep the data as close to 100% searchable as it can. You can make the process more graceful by doing a "splunk offline" on the peer before restarting it.
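A sketch of what that looks like on the peer (plain `splunk offline` is the fast form; `--enforce-counts` is the slower "decommission" form that waits for the cluster to meet its replication and search factors first):

```
# Run on the peer you plan to restart -- tells the master the shutdown
# is intentional so it can handle the peer's buckets gracefully.
$SPLUNK_HOME/bin/splunk offline

# Fuller form: the peer stays up until replication_factor and
# search_factor are satisfied elsewhere in the cluster.
$SPLUNK_HOME/bin/splunk offline --enforce-counts

# After maintenance, bring the peer back.
$SPLUNK_HOME/bin/splunk start
```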
Please let me know if this answers your question 😄