Deployment Architecture

What is the correct procedure to patch the OS and reboot host servers in an indexer clustering environment?

kornkid42
Explorer

We have 1 master and 3 peers, all running RHEL 7. My question is: what is the correct procedure to patch the OS and reboot the host servers? Here are the steps I took:

1. Put cluster in maintenance mode
2. Patch indexer cluster master node and reboot
3. Put cluster back in maintenance mode
4. Patch the search peers
5. Restart the peer servers one at a time
6. Take cluster out of maintenance mode

Is this the correct procedure? When it was completed, the number of buckets rose from 13 to 29. Is this expected?
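For reference, the commands behind those steps looked roughly like this (assuming a default /opt/splunk install path; hostnames and paths may differ in your environment):

    # On the cluster master, before touching any peers:
    /opt/splunk/bin/splunk enable maintenance-mode
    # Patch and reboot the master host (as root):
    yum update -y && reboot
    # Maintenance mode did not persist across the master restart here, so re-enable it:
    /opt/splunk/bin/splunk enable maintenance-mode
    # Patch and reboot each peer, one at a time (as root):
    yum update -y && reboot
    # From the master, confirm the peer is back up before moving to the next one:
    /opt/splunk/bin/splunk show cluster-status
    # Once all peers are back:
    /opt/splunk/bin/splunk disable maintenance-mode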

woodcock
Esteemed Legend

The first step was not necessary, but your process is otherwise fine. The reason for the increase in buckets is that whenever Splunk stops or restarts, it closes all open hot buckets (at least one for every index on every indexer), and new hot buckets are opened when indexing resumes, so the total count grows. This is normal.
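You can count the hot buckets per indexer with something like this (YourIndexName is just a placeholder):

    | dbinspect index=YourIndexName
    | search state=hot
    | stats count AS hot_buckets BY splunk_server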


season88481
Contributor

You can patch your cluster master last, so that you don't have to enable maintenance mode twice.


somesoni2
Revered Legend

You can follow the same sequence as for a Splunk upgrade of an indexer cluster. Details here: http://docs.splunk.com/Documentation/Splunk/6.2.0/Indexer/Upgradeacluster#Upgrade_to_a_new_maintenan...


kornkid42
Explorer

I followed the directions in the link you posted. Is it normal for the "Buckets" numbers to rise every time the servers are rebooted? Each peer had 12 buckets, and after rebooting twice, they now have 29.


somesoni2
Revered Legend

I believe the reboot causes the cluster master to rebalance the primary buckets. See this:

http://docs.splunk.com/Documentation/Splunk/6.2.6/Indexer/Rebalancethecluster

Can you run | dbinspect index=YourIndexName from the search bar to check the different buckets and their types/sizes? This is just to check whether the reboot caused hot buckets to roll to warm before they reached the maximum size.
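Something like this (YourIndexName is just a placeholder) groups the buckets by state and indexer and shows their sizes, so you can see whether buckets were rolled well below the maximum bucket size:

    | dbinspect index=YourIndexName
    | stats count AS buckets, sum(sizeOnDiskMB) AS total_mb, max(sizeOnDiskMB) AS largest_mb BY state, splunk_server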
