We are running a distributed Splunk application on CentOS 7.3. Please find our unit file below:
[Unit]
Description=Splunk Enterprise 6.5.2
After=network.target
Wants=network.target
[Service]
Type=forking
RemainAfterExit=False
User=root
Group=root
LimitNOFILE=65536
ExecStart=/opt/splunk/bin/splunk start --accept-license --answer-yes --no-prompt
ExecStop=/opt/splunk/bin/splunk stop
ExecReload=/opt/splunk/bin/splunk restart
PIDFile=/opt/splunk/var/run/splunk/splunkd.pid
[Install]
WantedBy=multi-user.target
When we start or restart Splunk on our nodes through systemctl for the first time, the service starts and then stops shortly afterwards. Starting Splunk after that works correctly. What are we missing here? Should we be making additional changes to the unit file?
Thanks in advance,
Keerthana
I have the same kind of issue when using systemd here.
On my side, it happens when I add an indexer (or a search head) to a cluster.
When an indexer joins a cluster, the cluster master sends a configuration bundle and asks Splunk to restart.
It seems that with systemd, Splunk stops properly but does not start again afterwards.
You may want to add something like this to the unit file:
Restart=on-failure
RestartSec=30s
But you will then be forced to use systemctl to stop Splunk (otherwise, systemd will start it again after 30 seconds).
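Rather than editing the packaged unit file in place, those same settings can live in a systemd drop-in, which survives later edits to the main unit. A minimal sketch, assuming the unit is named splunk.service (the drop-in directory name follows from that assumption):

```ini
# /etc/systemd/system/splunk.service.d/restart.conf
# (drop-in path assumes the unit is named splunk.service)
[Service]
Restart=on-failure
RestartSec=30s
```

After creating the drop-in, run `systemctl daemon-reload` so systemd picks it up.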
I'm still looking for another solution, maybe someone else can help here.
Thanks.
The log shows a normal Splunk shutdown.
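If the log shows a clean shutdown but you want to confirm whether splunkd is actually gone (rather than systemd having lost track of a forked process), one way is to compare the PID file against the process table. A minimal sketch; the default path matches the PIDFile= line in the unit file above, and the function name is my own:

```shell
# check_splunk_pid: report whether the PID recorded in a Splunk PID file
# is still alive. Defaults to the PIDFile= path from the unit file.
check_splunk_pid() {
    pidfile="${1:-/opt/splunk/var/run/splunk/splunkd.pid}"
    # No PID file usually means splunkd is not running (or never started).
    [ -f "$pidfile" ] || { echo "no pid file"; return 1; }
    pid="$(head -n 1 "$pidfile")"
    # kill -0 sends no signal; it only tests whether the process exists.
    if kill -0 "$pid" 2>/dev/null; then
        echo "splunkd running as PID $pid"
    else
        echo "stale pid file: PID $pid not running"
        return 1
    fi
}
```

If this reports a stale PID file while systemd still considers the unit active (or vice versa), that mismatch points to the Type=forking/PIDFile handling rather than Splunk itself.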