I have tried a rolling restart of the cluster peers and it doesn't solve the problem I'm facing; a manual restart of one of the cluster peers gave the expected result.
The problem I'm facing:
I changed the ulimits with ulimit -n 65536, but the change doesn't take effect unless Splunk is restarted manually. If the cluster is restarted via a rolling restart, I cannot see the change in the ulimits.
Please help !!
I have used this command and the change isn't happening unless Splunk is restarted. If the cluster is restarted by rolling restart I cannot see the change in the ulimits.
That would be expected behaviour: for a new ulimit to be respected, you would need to create the process from scratch from a newly logged-in shell session (or reboot the server).
If Splunk triggers the restart, it has to fork from the existing process, which still has the old ulimit, so the restarted process inherits the old ulimit as well.
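That inheritance is easy to demonstrate outside Splunk. A minimal Python sketch (purely illustrative, nothing Splunk-specific) showing that a child process spawned by a parent keeps the parent's file-descriptor limit, which is why a restart triggered by splunkd itself keeps the old ulimit:

```python
import resource
import subprocess
import sys

# Lower the soft "open files" limit in this (parent) process only.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
new_soft = min(256, soft)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

# Any process spawned from here inherits the lowered limit -- editing
# limits.conf or running "ulimit -n" in another shell does not change it.
child_soft = int(subprocess.check_output(
    [sys.executable, "-c",
     "import resource; print(resource.getrlimit(resource.RLIMIT_NOFILE)[0])"]
).strip())

print(child_soft == new_soft)  # the child keeps the parent's limit
```

The same applies in reverse: raising the limit in limits.conf does nothing for an already-running splunkd or any process it forks, which is why only a restart from a fresh login shell picks up the new value.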
As a one-off, you are going to need to restart the indexer cluster peers from the CLI; you could just run splunk offline and then bring that peer back online once done.
Obviously you need to do this during a maintenance window of some kind.
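On a typical Linux peer the one-off procedure might look like the following sketch (the install path, the splunk user, and the limits.conf lines are assumptions; adjust for your environment):

```shell
# On each indexer cluster peer, one at a time, during a maintenance window.
# Paths assume a default /opt/splunk install; adjust as needed.

# 1. Take the peer offline gracefully (this notifies the cluster master):
/opt/splunk/bin/splunk offline

# 2. Make the ulimit change persistent for the splunk user, e.g. in
#    /etc/security/limits.conf (illustrative lines):
#      splunk  soft  nofile  65536
#      splunk  hard  nofile  65536

# 3. Start Splunk again from a fresh login shell (or after a reboot) so the
#    new limit is picked up:
su - splunk -c "/opt/splunk/bin/splunk start"
```

Starting via su - (a fresh login session) matters here: the new PAM limits are applied at login, so the restarted splunkd gets the new ulimit rather than inheriting the old one.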
Hi @garethatiag ,
Do you mean to bring a peer offline, make the changes, reboot the instance, and then bring it back online?
Won't that affect the cluster, or is it a safe way to restart it?
It will affect the cluster: the offline command will advise the master that the peer is going offline and will therefore make the appropriate arrangements to ensure the cluster remains searchable while the peer is offline.
It will also briefly apply maintenance mode (for a period of time, see the linked documentation for more information).
This will be safer than running splunk stop and splunk start on a cluster member.
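Once the peer is back online, you can confirm the running splunkd process actually picked up the new limit by reading its /proc entry on Linux (a sketch; "splunkd" assumes the default process name):

```shell
# Check the "Max open files" limit of a running process via /proc (Linux).
# Pass the splunkd pid (e.g. $(pgrep -o splunkd)); defaults to this shell's pid.
PID=${1:-$$}
grep "Max open files" "/proc/$PID/limits"
```

This reports what the process is actually running with, which is more reliable than checking ulimit -n in your own shell.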