Splunk Enterprise

Splunk Nodes Restart

acavenago
Explorer

 

Hello,
we need to patch the OS of our Splunk Enterprise cluster distributed on 2 sites, A & B.

We will start the activity on site A, which contains one Deployer, two SHs, one MN (manager node), three indexers and three HFs.
Site B contains one SH, three indexers and one HF, and will be updated later.

Considering that the patching of OS will require a restart of the nodes, can you please tell me Splunk Best Practice to restart the Splunk nodes?
I'd start with the SH nodes, then the indexer nodes, the Deployer, the MN and the HFs, all one by one.

Do I have to enable maintenance mode on each node, restart the node and disable maintenance mode, or is it sufficient to stop Splunk on each node and restart the machine?


Thank you,
Andrea


isoutamo
SplunkTrust

Hi

Usually I do this one layer at a time: SHs, then indexers, and so on. On the SH layer there is usually no need to put the nodes into detention before the reboot, but adjust that to how your Splunk deployment is used.

Also, if the indexers restart quickly (a couple of minutes at most), just extend the timeouts for detecting node downtime, if needed, to avoid unnecessarily switching bucket primaries to other nodes. Of course it's good to put the MN into maintenance mode before you restart the indexers one by one. Usually I keep Splunk up and running until it's time for the reboot. After all the OSs have been updated and restarted, disable maintenance mode on the MN and wait for the required fixup (repair) actions to finish.
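A sketch of the maintenance-mode steps above, run from the manager node's CLI (an outline only; the heartbeat_timeout value shown is an illustrative assumption, not a recommendation):

```shell
# On the manager node (MN), before rebooting the indexers one by one:
splunk enable maintenance-mode --answer-yes

# Confirm the cluster really is in maintenance mode.
splunk show maintenance-mode

# (Optional) If peers take a bit longer to come back, the downtime
# detection window can be raised on the MN in server.conf, e.g.:
#   [clustering]
#   heartbeat_timeout = 120
# ...then reboot each indexer in turn and wait for it to rejoin...

# After every node is patched and back up:
splunk disable maintenance-mode
```

Maintenance mode is set only on the MN; it governs how the whole indexer cluster reacts while peers go down.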

r. Ismo


acavenago
Explorer

Hi isoutamo,

sorry for the dumb question, but do I have to put only the MN into maintenance mode, or the other nodes as well (except the SHs)?

Do I also have to stop Splunk manually, or is it stopped automatically during the OS shutdown?

 

Thank you,

Andrea

 


isoutamo
SplunkTrust

You should put only the MN into maintenance mode; that also controls the indexers.

If you have installed Splunk correctly and enabled boot-start, it should work like a regular reboot. Of course, if you want, you can stop Splunk manually before the reboot. In that case use "splunk stop" (e.g. systemctl stop Splunkd) instead of "splunk offline", which is what you would normally use when you want to move bucket primaries to another node.
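To make the difference between the two stop variants concrete, a hedged per-indexer sketch (assuming boot-start was enabled with systemd management, so the unit is named Splunkd):

```shell
# Graceful local stop: the peer simply goes down; with the MN already
# in maintenance mode, no bucket-fixup storm follows.
splunk stop            # or: systemctl stop Splunkd

# Patch and reboot the OS; with boot-start enabled, Splunk comes back
# up automatically when the machine does.
reboot

# NOT for this scenario: "splunk offline" deliberately removes the
# peer from the cluster and reassigns its primaries to other peers
# first -- use it when decommissioning, not for a quick OS reboot.
# splunk offline
```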

 


