Wondering if anyone has gone through a hardware migration of a clustered indexer environment. Long story short: we want to move to a new platform and abandon the current hardware due to several issues we are having with it.
All data is local to the indexers. We are running Splunk Enterprise 6.3.3.
So this is what we would like to do:
- Start an rsync of the warm and cold buckets from peer A to the new host B, with peer A still online (see the command sketch after this list)
- Either:
  I) Put the CM into maintenance mode, to avoid extra replication work since the new node will already have the data?
     - Stop peer A (and remove it later, once the new peer has joined with peer A's data; then disable maintenance mode)
  OR
  II) Remove peer A from the cluster:
     - splunk offline --enforce-counts
- Run a final rsync job to sync the latest changes on the hot and cold volumes from A to host B
- Install the same version of Splunk on host B, and place a copy of the cluster's indexes.conf in etc/system/local/ before the first start
- Start Splunk on host B
- Add host B to the cluster and restart host B
- Push a new outputs.conf to all UFs and HFs with the address of the new peer B, removing peer A
- If all goes well, then we permanently remove the old peer A:
- splunk remove cluster-peers -peers
- Repeat until all peers are removed from the cluster and moved to the new hosts.
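For concreteness, here is roughly what we have in mind for the sync and for options I / II. This is only a sketch; the hostname idx-new-b, the paths, and the hot-bucket exclude pattern are placeholders/assumptions, not tested commands:

    # Initial sync while peer A is still online; skip hot buckets, they are still being written
    # (hot buckets live in each index's db/ directory as hot_v1_*)
    rsync -a --exclude='hot_v1_*' /opt/splunk/var/lib/splunk/ idx-new-b:/opt/splunk/var/lib/splunk/

    # Option I: pause bucket fix-up, then stop peer A
    splunk enable maintenance-mode      # on the CM
    splunk stop                         # on peer A
    # ... bring up host B with peer A's data, then:
    splunk disable maintenance-mode     # on the CM

    # Option II: graceful removal instead of maintenance mode
    splunk offline --enforce-counts     # on peer A

    # Final delta sync once peer A is stopped
    rsync -a --delete /opt/splunk/var/lib/splunk/ idx-new-b:/opt/splunk/var/lib/splunk/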
Do you guys think this can work? Any suggestions / recommendations on this draft plan, especially regarding options I and II above?
Thanks a lot!
The preferred procedure is
1) Install Splunk on the new hardware and configure it to match the old indexers
2) Add the new indexers to the cluster
3) Put the old indexers into Detention (not an option in 6.3.3)
4) Issue a 'splunk offline --enforce-counts' command on ONE old indexer (see the sketch after this list)
5) Wait for the buckets to migrate off the old indexer. Depending on the number of buckets, this could take a while.
6) Repeat steps 4-5 for the remaining old indexers.
7) Once all buckets are moved to the new indexers you can remove the old indexers from the cluster.
Note there is no need for rsync. The cluster automatically copies data between indexers.
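For steps 1-2 and 4-5, a minimal sketch assuming the standard Splunk 6.x CLI (the master URI, replication port, and secret below are placeholders):

    # Steps 1-2: on each new indexer, after installing and configuring Splunk,
    # join it to the cluster as a peer (6.x calls this "slave" mode)
    splunk edit cluster-config -mode slave -master_uri https://cm.example.com:8089 -replication_port 9887 -secret <cluster_secret>
    splunk restart

    # Step 4: on ONE old indexer only
    splunk offline --enforce-counts

    # Step 5: on the cluster master, wait until the replication and search factors are met again
    splunk show cluster-status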
Hi, great answer, just a small follow-up.
What happens if the indexer was converted to a cluster member? Will this approach work for the non-clustered data on the indexer?
Kind regards
Lars
No, non-clustered indexes are not replicated, even with enforce-counts.
You would have to manually move those buckets onto a new indexer, for example:
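A rough sketch of such a manual move (the index name myindex, paths, and hostname are placeholders; stop Splunk on the source first):

    # On the old indexer, with Splunk stopped: copy the standalone index's
    # warm and cold buckets into the same index on the new host
    rsync -a /opt/splunk/var/lib/splunk/myindex/db/ idx-new-b:/opt/splunk/var/lib/splunk/myindex/db/
    rsync -a /opt/splunk/var/lib/splunk/myindex/colddb/ idx-new-b:/opt/splunk/var/lib/splunk/myindex/colddb/
    # If bucket IDs collide with buckets already on the target, rename the incoming
    # buckets (db_<newest>_<oldest>_<id>) to unused IDs before starting Splunk there.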
We plan the same, but our idea is to add the new nodes to the cluster and then remove the old nodes one after another, giving the cluster just enough time to redistribute the buckets.
Yes, that's a good plan too. We are still considering doing a one-peer migration following the plan above, and if that doesn't work well we may add all the new peers at once and start shutting down the old peers from the CM. Thanks for sharing.
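For pacing the removals, we would probably watch the CM between peers, roughly like this (a sketch; <peer_guid> is a placeholder):

    # On the CM, after each old peer finishes 'splunk offline --enforce-counts':
    splunk show cluster-status                        # wait for RF/SF to be met again
    splunk remove cluster-peers -peers <peer_guid>    # then drop the downed peer
    # repeat with the next old peer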