Deployment Architecture

What is the correct procedure to patch the OS and reboot host servers in an indexer clustering environment?

Explorer

We have 1 master and 3 peers, all running RHEL 7. My question is: what is the correct procedure to patch the OS and reboot the host servers? Here are the steps I took:

1. Put the cluster in maintenance mode
2. Patch the indexer cluster master node and reboot
3. Put the cluster back in maintenance mode
4. Patch the search peers
5. Restart the peer servers one at a time
6. Take the cluster out of maintenance mode
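
The steps above can be sketched with Splunk's standard CLI (the install path is illustrative, and whether maintenance mode survives a master restart depends on the Splunk version, which is why the thread re-enables it):

```shell
# On the cluster master: enable maintenance mode so bucket fix-up
# activity is paused during the rolling reboots
/opt/splunk/bin/splunk enable maintenance-mode --answer-yes

# ... patch and reboot the master host at the OS level ...

# Re-enable maintenance mode after the master is back up
# (step 3 in the list above)
/opt/splunk/bin/splunk enable maintenance-mode --answer-yes

# On each peer, one at a time: take it offline cleanly, then
# patch and reboot the host
/opt/splunk/bin/splunk offline

# Once all peers have rejoined, on the master:
/opt/splunk/bin/splunk disable maintenance-mode
```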

Is this the correct procedure? After it completed, the number of buckets rose from 13 to 29. Is this expected?

Esteemed Legend

The first step was not necessary, but your process is otherwise fine. The reason for the increase in buckets is that whenever Splunk stops or restarts, it closes all open hot buckets (at least 1 for every index on every indexer). This is normal.
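
A back-of-the-envelope sketch of why the count jumps (the index and restart counts below are illustrative, not taken from the thread):

```python
def min_new_buckets(index_count: int, restarts: int) -> int:
    """Lower bound on hot buckets rolled on one peer: each restart
    closes at least one open hot bucket per actively written index."""
    return index_count * restarts

# e.g. a peer writing to 8 indexes, restarted twice during patching:
print(min_new_buckets(8, 2))  # -> 16 additional (now warm) buckets
```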


Communicator

You can patch your cluster master last, so that you don't have to enable maintenance mode twice.


Revered Legend

You can follow the same sequence as for a Splunk upgrade of an indexer cluster. Details here: http://docs.splunk.com/Documentation/Splunk/6.2.0/Indexer/Upgradeacluster#Upgrade_to_a_new_maintenan...


Explorer

I followed the directions in the link you posted. Is it normal for the "Buckets" numbers to rise every time the servers are rebooted? Each peer had 12 buckets, and after rebooting twice, they now have 29.


Revered Legend

I believe the reboot causes the cluster master to rebalance the primary buckets. See this:

http://docs.splunk.com/Documentation/Splunk/6.2.6/Indexer/Rebalancethecluster

Can you run | dbinspect index=YourIndexName from search to check the different buckets and their types/sizes? This is just to check whether the reboot has caused hot buckets to roll to warm before they reached their maximum size.
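
For reference, a search of that kind can group the buckets by state (the index name is a placeholder; state and sizeOnDiskMB are fields dbinspect returns):

```
| dbinspect index=YourIndexName
| stats count, sum(sizeOnDiskMB) AS totalMB by state
```

A pile of small warm buckets, well under maxDataSize, is the signature of restarts rolling hot buckets early rather than of actual data growth.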
