The main idea is to have a graceful shutdown/start and run the necessary commands for the clusters. I had a look at your high-level steps and they look OK; I did something similar a while ago in terms of stopping a Splunk cluster environment and bringing it back up.

Shutdown:
- Shut down the data forwarding tier first, otherwise data will be lost as it has nowhere to go.
- Place the CM in maintenance mode.
- Shut down the Deployment Server / HFs, if in use.
- Shut down the SHC: take note of the SHC captain, stop the SHC members, and stop the captain last. Make sure they are all down.
- Shut down the Deployer.
- With the CM still in maintenance mode, shut down the indexers with the normal command (/opt/splunk/bin/splunk stop), one at a time, and make sure they are down.
- Shut down the CM.

Startup (the reverse):
- Make sure the CM is up and that it is still in maintenance mode.
- Bring all the indexers up; when they are all up, disable maintenance mode. Check the status using the MC; the replication and search factors should be met and the data searchable (green status), so you may have to wait a bit.
- Bring the Deployer back up.
- Bring the SHC members up one by one: ensure the captain is up first, then the other SHC members, and check that they can communicate with it, using the SHC cluster commands to check status.
- Bring back the Deployment Server / HFs.
- Bring back the data forwarding tier.
- Use the MC to check overall health.

I would document all the steps and commands clearly (see the command sketches below), so you have a process to follow with checkpoints, rather than working in an ad-hoc manner, given the many moving parts.
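For reference, here is a minimal sketch of the shutdown side. It assumes default install paths (/opt/splunk for full instances, /opt/splunkforwarder for universal forwarders) and that each command is run on the host in question; adjust paths and authentication to your environment.

    # --- Shutdown sequence (run on the relevant host) ---

    # 1. Data forwarding tier (path assumes a universal forwarder install)
    /opt/splunkforwarder/bin/splunk stop

    # 2. Cluster manager: enable maintenance mode so peer shutdowns don't trigger fix-up activity
    /opt/splunk/bin/splunk enable maintenance-mode

    # 3. Deployment Server / heavy forwarders, if in use
    /opt/splunk/bin/splunk stop

    # 4. SHC: identify the captain, then stop the members, captain last
    /opt/splunk/bin/splunk show shcluster-status    # run on an SHC member to note the captain
    /opt/splunk/bin/splunk stop                     # on each member, captain last

    # 5. Deployer
    /opt/splunk/bin/splunk stop

    # 6. Indexers, one at a time, confirming each is down before moving on
    /opt/splunk/bin/splunk stop
    /opt/splunk/bin/splunk status                   # confirm splunkd has stopped

    # 7. Cluster manager last
    /opt/splunk/bin/splunk stop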
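And a similar sketch for the startup and verification side, under the same assumptions; the show commands are where you checkpoint before moving on to the next tier.

    # --- Startup sequence ---

    # 1. Cluster manager first; confirm it is still in maintenance mode
    /opt/splunk/bin/splunk start
    /opt/splunk/bin/splunk show maintenance-mode

    # 2. Indexers; once all peers are up, disable maintenance mode and wait for RF/SF to be met
    /opt/splunk/bin/splunk start                    # on each indexer
    /opt/splunk/bin/splunk disable maintenance-mode # on the CM
    /opt/splunk/bin/splunk show cluster-status      # on the CM; wait for green/searchable

    # 3. Deployer
    /opt/splunk/bin/splunk start

    # 4. SHC: captain first, then the other members; verify the cluster re-forms
    /opt/splunk/bin/splunk start
    /opt/splunk/bin/splunk show shcluster-status    # on a member; check captain and member states

    # 5. Deployment Server / HFs
    /opt/splunk/bin/splunk start

    # 6. Data forwarding tier last
    /opt/splunkforwarder/bin/splunk start

    # Finally, use the Monitoring Console to check overall health.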