Deployment Architecture

Splunk servers need to be taken care of before applying a Linux patch

singhrakesh
Explorer

Hi All,

Could you please share the steps that need to be performed at the Splunk level before applying a Linux patch and rebooting?

The Splunk architecture consists of:
Cluster Master
Indexers
Search Heads
Forwarders

Thanks
Rakesh

vliggio
Communicator

This is a fairly easy process, since Splunk does not rely on components that would be patched (e.g., Splunk bundles its own version of Python). You can patch all your servers first without taking Splunk down, then reboot in the following order:

1. Reboot the Cluster Master
2. Reboot the Search Head
3. Put the Cluster Master into maintenance mode
4. Reboot the indexers one by one, waiting for the buckets to re-register and the cluster to become searchable after each
5. Take the Cluster Master out of maintenance mode

If you don't care about Splunk being searchable during the reboots, you can be a bit more aggressive and reboot all the indexers at once. The forwarders will queue their data while waiting for the indexers to come back online.
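The order above can be sketched as a shell script. This is only a sketch: the hostnames, SSH access, and the /opt/splunk path are assumptions, and it prints each command (dry-run) instead of executing it, so review and adapt it before use.

```shell
#!/bin/sh
# Dry-run sketch of the rolling-reboot order. All hostnames and the
# /opt/splunk path are placeholders -- adjust for your environment.
CM=cluster-master.example.com        # hypothetical Cluster Master host
SH=search-head.example.com           # hypothetical Search Head host
INDEXERS="idx1.example.com idx2.example.com"
SPLUNK=/opt/splunk/bin/splunk

run() { echo "WOULD RUN: $*"; }      # print instead of executing

patch_reboot_sequence() {
    # 1-2. Reboot the Cluster Master, then the Search Head
    run ssh "$CM" sudo reboot
    run ssh "$SH" sudo reboot
    # 3. Maintenance mode on, so the CM does not start bucket fix-up
    run ssh "$CM" "$SPLUNK" enable maintenance-mode --answer-yes
    # 4. Indexers one by one, checking cluster status between reboots
    for idx in $INDEXERS; do
        run ssh "$idx" sudo reboot
        run ssh "$CM" "$SPLUNK" show cluster-status
    done
    # 5. Maintenance mode off once every peer is back and searchable
    run ssh "$CM" "$SPLUNK" disable maintenance-mode
}

patch_reboot_sequence
```

To execute for real, replace the `run` function body with `"$@"` once the hostnames and paths match your deployment.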


woodcock
Esteemed Legend

The only ones that need special care are the indexers. Put the cluster into maintenance mode from the Cluster Master while doing the reboots; otherwise you will end up with many, many extra buckets. Maintenance mode stops the Cluster Master from replicating buckets until you are done restarting the indexers.


singhrakesh
Explorer

Thanks, woodcock, for your quick response.

Please correct the steps below if I missed anything:

Step 1 --> Patch the Cluster Manager

a. Run "splunk stop" to stop the Splunk process
b. Perform the OS update and restart
c. After the reboot, the Cluster Manager will be back online

Step 2 --> Patch the Search Head

a. Run "splunk stop" to stop the Splunk process
b. Perform the OS update and restart
c. After the reboot, the Search Head will be back online

Step 3 --> Patch the indexer peers

a. Run "splunk enable maintenance-mode" on the CM
b. Run "splunk stop" on Indexer 1
c. Perform the OS update and restart
d. After the reboot, Indexer 1 will be back online
e. Run "splunk stop" on Indexer 2
f. Perform the OS update and restart
g. After the reboot, Indexer 2 will be back online
h. Run "splunk disable maintenance-mode" on the CM
i. Confirm with "splunk show maintenance-mode" on the CM
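Step 3 can be generalized to any number of indexer peers with a small loop. Again a dry-run sketch only: the hostnames, the /opt/splunk path, the use of yum, and boot-start for Splunk are assumptions for illustration.

```shell
#!/bin/sh
# Sketch of Step 3 as a loop over indexer peers. Everything is
# printed (dry-run); hostnames and paths are placeholders.
SPLUNK=/opt/splunk/bin/splunk
CM=cluster-master.example.com        # hypothetical Cluster Master host

plan() { echo "PLAN: $*"; }          # print instead of executing

patch_indexer() {   # usage: patch_indexer <host>
    plan ssh "$1" sudo "$SPLUNK" stop
    plan ssh "$1" sudo yum update -y     # or apt-get upgrade, per distro
    plan ssh "$1" sudo reboot
    # Assumes Splunk starts at boot (splunk enable boot-start); then
    # check on the CM that the peer is Up before the next indexer:
    plan ssh "$CM" "$SPLUNK" show cluster-status
}

plan ssh "$CM" "$SPLUNK" enable maintenance-mode --answer-yes
for idx in idx1.example.com idx2.example.com idx3.example.com; do
    patch_indexer "$idx"
done
plan ssh "$CM" "$SPLUNK" disable maintenance-mode
plan ssh "$CM" "$SPLUNK" show maintenance-mode   # confirm it is off
```

Keeping the per-peer work in one function makes it easy to add more indexers without repeating steps b through g by hand.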

How about the forwarders, do I need to stop them?
While the Cluster Master is in maintenance mode, will the indexers still ingest logs from other sources?

Is there any data loss (search head logs / forwarder logs) on the indexers while the cluster is in maintenance mode?
