Splunk Enterprise

Splunk Nodes Restart

acavenago
Explorer

 

Hello,
we need to patch the OS of our Splunk Enterprise cluster, which is distributed across two sites, A and B.

We will start the activity on site A, which contains one Deployer, two Search Heads (SH), one Manager Node (MN), three Indexers and three Heavy Forwarders (HF).
Site B contains one SH, three Indexers and one HF, and will be updated later.

Since the OS patching will require a restart of each node, what is the Splunk best practice for restarting the Splunk nodes?
I'd start with the SH nodes, then the Indexers, Deployer, MN and HFs, all one by one.

Do I have to enable maintenance mode on each node, restart the node and disable maintenance mode, or is it sufficient to stop Splunk on each node and restart the machine?


Thank you,
Andrea


isoutamo
SplunkTrust

Hi

Usually I do it one layer at a time: SH, IDX, etc. On the SH layer there is usually no need to put members into detention before rebooting, but you can do it if your Splunk usage requires it (see the sketch below).
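
If you do want to drain a search head cluster member before its reboot, a minimal sketch of what that could look like, assuming the commands are run on the member you are about to reboot and $SPLUNK_HOME/bin is on the PATH:

# Stop this SHC member from accepting new search jobs before the reboot
splunk edit shcluster-config -manual_detention on

# ... patch the OS and reboot the member ...

# Let the member accept searches again once it is back up
splunk edit shcluster-config -manual_detention off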

On the indexer layer, if each indexer restarts quickly (a couple of minutes at most), just extend the timeouts for detecting node downtime if needed, to avoid unnecessary switching of primaries to another node. It's also good to put the MN into maintenance mode before you restart the indexers one by one. Usually I keep Splunk up and running until it's time for the reboot. After all OSs have been updated and restarted, disable maintenance mode on the MN and wait for the needed repair/fixup actions to finish.
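
A minimal sketch of those MN-side steps, run on the manager node; the 600-second timeout is just an example value and $SPLUNK_HOME/bin is assumed to be on the PATH:

# Optionally give peers more time to come back before primaries are reassigned
splunk edit cluster-config -restart_timeout 600

# Halt bucket fixup while the indexers are rebooted one by one
splunk enable maintenance-mode

# ... patch the OS and reboot each indexer in turn ...

# Re-enable normal bucket fixup once every indexer is back
splunk disable maintenance-mode

# Watch the repair activity until the cluster reports search and replication factors are met
splunk show cluster-status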

r. Ismo


acavenago
Explorer

Hi isoutamo,

sorry for the dumb question, but do I have to put only the MN into maintenance mode, or the other nodes as well (except the SHs)?

Do I also have to stop Splunk manually, or is it stopped automatically during the OS shutdown?

 

Thank you,

Andrea

 


isoutamo
SplunkTrust

You should put only the MN into maintenance mode. That also controls the indexers.

If you have installed Splunk correctly and enabled boot-start, then it should work like a regular reboot: Splunk is stopped and started by the OS. Of course, if you want, you can stop Splunk yourself before the reboot. In that case you should use "splunk stop" (or systemctl stop Splunkd) instead of "splunk offline", which is what you would normally use when you want to move primaries to another node.
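
A minimal sketch of the difference, run on the indexer being rebooted; the -user value below is just an example:

# One-time setup: have the OS manage Splunk via systemd (creates the Splunkd unit)
splunk enable boot-start -systemd-managed 1 -user splunk

# Plain stop before an OS reboot: the peer keeps its primaries, and the MN
# (in maintenance mode) simply waits for it to come back
splunk stop
# or, with systemd:
systemctl stop Splunkd

# By contrast, "splunk offline" tells the MN to reassign this peer's primaries
# and is meant for taking a peer out of the cluster for an extended period
splunk offline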

 


