We are planning a migration of the hot drives to faster disk for all indexers. The current plan is below, and I wonder if it makes sense: the plan enters and exits maintenance mode for each indexer, whereas I have always kept maintenance mode on for the entire operation. Any thoughts?
The current plan (one pass of the loop is also sketched as commands after the list) -
Suspend alert monitoring on the indexers
(Repeat the following for each indexer, one at a time)
Put Splunk Cluster in Maintenance Mode
Stop Splunk service on One Indexer
Storage vMotion the existing 2.5TB disk to one of the Unity datastores
Provision a new 2.5TB VM disk from the vSAN datastore
Rename the existing hot data logical volume "/opt/splunk/hot-data" to "/opt/splunk/hot-data-old"
Create a new volume group and mount the new 2.5TB disk as "/opt/splunk/hot-data"
Restart Splunk service on indexer
Take Indexer Cluster out of Maintenance Mode
Review the Cluster Master to confirm the indexer is processing and rebalancing has started as expected
Wait a few minutes to allow Splunk to rebalance across all indexers
(Return to top and repeat steps for next indexer)
Validate service and perform test searches
Check the CM panel -> Resources/usage/machine (bottom panel, IOWait Times) and monitor changes in IOWait
Re-enable alert monitoring on the indexers
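For reference, here is roughly what one pass of that loop looks like at the command line. Treat it as a sketch only: the splunk maintenance-mode/stop/start commands are the standard CLI, but the device name /dev/sdX and the VG/LV names (vg_hot, vg_hot_new, lv_hot) are hypothetical, and the plan deliberately starts the indexer with an empty hot path and lets the cluster re-replicate once maintenance mode is lifted.

# On the cluster manager: suppress bucket fix-up while a peer is down
splunk enable maintenance-mode        # prompts for confirmation
splunk show maintenance-mode          # verify it is on

# On the indexer being migrated (all device/VG/LV names hypothetical)
splunk stop
umount /opt/splunk/hot-data
mv /opt/splunk/hot-data /opt/splunk/hot-data-old
mount /dev/vg_hot/lv_hot /opt/splunk/hot-data-old   # keep old buckets reachable

# Carve up the new vSAN-backed disk (shown as /dev/sdX; confirm with lsblk)
pvcreate /dev/sdX
vgcreate vg_hot_new /dev/sdX
lvcreate -l 100%FREE -n lv_hot vg_hot_new
mkfs.xfs /dev/vg_hot_new/lv_hot
mkdir /opt/splunk/hot-data
mount /dev/vg_hot_new/lv_hot /opt/splunk/hot-data   # and update /etc/fstab
chown splunk:splunk /opt/splunk/hot-data   # assuming Splunk runs as the splunk user
splunk start

# Back on the cluster manager
splunk disable maintenance-mode
splunk show maintenance-mode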
In addition, Splunk PS suggested using -
splunk offline --enforce-counts
I'm not sure that's the right way, since it might need to migrate the ~40TB of cold data, which would slow down the entire operation.
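For context on why I hesitate, my understanding of the two offline forms:

# Fast offline: the peer rolls its hot buckets and shuts down; the cluster
# repairs replication/search factors afterwards (or defers, in maintenance mode)
splunk offline

# Enforced offline: the manager first re-replicates this peer's buckets so
# replication and search factors are met without it - with ~40TB of cold
# data on the peer, that re-replication is exactly the slow part
splunk offline --enforce-counts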
I totally agree with @PickleRick. If you have this in VMware or something similar, why not use it to do that storage migration? In that case there is no need to do anything on the Splunk side, and you can do this without service breaks. Of course, if you don't have the needed licenses for your virtualisation/storage layer, then it's a different story. But I expect that you have Linux VGs in use, and you can use those to do this without service breaks, or at least with minimal reboots.
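A minimal sketch of that LVM route, assuming the hot-data LV sits in a volume group I'll call vg_hot on an old device /dev/sdX, and the new vSAN-backed disk appears as /dev/sdY (all names hypothetical):

pvcreate /dev/sdY            # initialise the new fast disk for LVM
vgextend vg_hot /dev/sdY     # add it to the existing hot-data VG
pvmove /dev/sdX /dev/sdY     # move extents online; Splunk keeps running
vgreduce vg_hot /dev/sdX     # drop the old disk from the VG
pvremove /dev/sdX            # clear its LVM label

No Splunk stop/start and no maintenance mode is needed on that path, because the filesystem stays mounted the whole time.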
splunk offline --enforce-counts
should be used only when you are removing the whole node permanently from the cluster!