I am at a client where, by policy, they must restart servers every week. They have an 8-node Search Head Cluster. What is the best method for restarting it?
(Is there a maintenance mode, as there is with indexer clustering? Do they need to run any command before/after the restart? Should they restart 1, 2, 3, or all 8 at a time?)
There is no maintenance mode in SHC. The nodes can be restarted in any order you want. It's a question of whether you want to maintain availability during the restart process. If availability is not required, then you can restart them all at once.
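If availability does matter, Splunk's own rolling restart is the usual route. A minimal dry-run sketch using the standard SHC CLI, which prints the commands rather than executing them (the install path and credentials are placeholders for your environment; run the real commands from any member):

```shell
#!/bin/sh
# Dry-run sketch of an availability-preserving restart of the Splunk
# instances via the SHC rolling restart. /opt/splunk is a placeholder path.
SPLUNK=/opt/splunk/bin/splunk

shc_rolling_restart() {
  # Verify the captain and all members are healthy before restarting anything.
  echo "$SPLUNK show shcluster-status"
  # The captain then restarts the members' Splunk instances one at a time.
  echo "$SPLUNK rolling-restart shcluster-members"
}

shc_rolling_restart
```

Note this restarts only the Splunk processes, not the host OS, so it may not satisfy a policy that requires a full server reboot.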
Great - thanks. Restarting all the servers at once won't cause unnecessary replication, even if some come back online before others?
I need to know how to restart the servers themselves, not run a rolling restart on the Splunk instances.
Splunk won't trigger a restart of the host OS. Maintenance mode is not required, because the SHC is a bit less paranoid about satisfying replication of the artifacts. We're not talking about data fidelity, we're talking about cached copies of the searches that have been run. If you're talking about replication of knowledge objects, that will always happen across all nodes.
Yes. I'm saying a restart of the host OS is required by policy, and I needed to know the best way to do it for the clustered search heads. It sounds like all at once is sufficient.
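If you do want to keep the cluster available while rebooting the hosts, one member at a time is the conservative approach. A dry-run sketch (it prints the commands instead of running them; the hostnames, ssh/reboot commands, and Splunk path are placeholders for this environment):

```shell
#!/bin/sh
# Sketch: reboot the host OS of an 8-node SHC one member at a time,
# checking cluster health before moving to the next. All names below
# (sh1..sh8, the reboot command, /opt/splunk) are placeholders.

MEMBERS="sh1 sh2 sh3 sh4 sh5 sh6 sh7 sh8"

plan_restarts() {
  # Dry run: echo each step rather than executing it over ssh.
  for host in $1; do
    echo "ssh $host sudo systemctl reboot"
    # After the host is back, confirm the member has rejoined before continuing.
    echo "ssh $host /opt/splunk/bin/splunk show shcluster-status"
  done
}

plan_restarts "$MEMBERS"
```

To restart everything at once instead, you would simply issue the reboot to all members in parallel and skip the per-host health check.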
This is a link to the docs, but the docs don't address this question. We want to know if rolling restarts perform what I would call a "graceful" restart. For a good description of how a graceful restart should work, see Apache's documentation: https://httpd.apache.org/docs/2.4/stopping.html#graceful
Users would expect a graceful restart to disallow new searches but allow currently running searches to finish before restarting.
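One way to approximate the "let running searches finish" half of that by hand is to ask a member's REST API how many searches are still running before bouncing it. A dry-run sketch that prints the query rather than executing it (the hostname, port, and credentials are placeholders; `/services/search/jobs` with a `dispatchState` filter is the standard jobs endpoint):

```shell
#!/bin/sh
# Sketch: build the REST query that lists a member's running searches.
# Waiting until this returns no jobs before restarting that member
# approximates a graceful drain. admin:changeme is a placeholder.

running_search_query() {
  host=$1
  echo "curl -sk -u admin:changeme \"https://$host:8089/services/search/jobs?search=dispatchState%3DRUNNING&count=0\""
}

running_search_query sh1
```

This doesn't stop *new* searches from being dispatched to the member, so it is only half of a true graceful restart; draining new traffic would have to happen at the load balancer in front of the search heads.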