We have a distributed Splunk deployment and use a deployment server to manage configurations. We have a Python script that periodically updates a few lookup CSV files and binary database files. In the script, we run the "reload deploy-server" command to distribute the changed files to all the systems. Although only lookup files change, the reload triggers a restart of the Splunk service on every node. Is there any way we can prevent this restart? We have saved summary searches running, and the restarts are causing missing buckets of data.
Thanks in advance,
Probably in your serverclass you have configured at least one of the apps containing these lookups to restart Splunk, so when you run "reload deploy-server" the remote Splunk instances are restarted!
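For example, a serverclass.conf stanza like the following on the deployment server would cause every reload that touches the app to restart the clients (the class and app names here are hypothetical placeholders, not taken from the original post):

```ini
# $SPLUNK_HOME/etc/system/local/serverclass.conf on the deployment server
# (serverclass and app names are illustrative only)
[serverClass:all_search_heads:app:lookups_app]
restartSplunkd = true
```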
We are setting restartSplunkd to true. If I remove that setting, what will happen when I make other configuration changes that do require a Splunk restart?
Yes, that's the problem!
I suggest changing the approach for the lookups; there are two choices:
From your post, it seems you are using the deployment server to manage a search head cluster? If that is the case, it is wrong: you should use the deployer for that instead.
Lookups normally don't trigger a restart of Splunk endpoints unless you have set restartSplunkd=true in the relevant serverclass stanza. If you want, you can explicitly set
restartSplunkd=false and give it a try.
I am not using a search head cluster; I just have two search heads that serve two different purposes. If I set restartSplunkd to false, what will happen when I make a configuration change that does require a Splunk restart?
You can set restartSplunkd to "false" per app that you push, so you can make it granular: package all your lookups into a dedicated app and set "false" only for that app.
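A minimal sketch of that granular layout, assuming hypothetical serverclass and app names (adapt them to your own deployment):

```ini
# serverclass.conf on the deployment server (all names are placeholders)

# App holding only the generated lookup CSVs: no restart on update
[serverClass:all_search_heads:app:lookups_app]
restartSplunkd = false

# App holding other config that genuinely needs a restart when it changes
[serverClass:all_search_heads:app:core_config_app]
restartSplunkd = true
```

With this split, running "reload deploy-server" after your script updates the lookups redeploys lookups_app without restarting the clients, while changes to core_config_app still trigger a restart as before.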