I have an issue with one indexer in a clustered environment. It went down due to a server issue and the server was rebooted, but now I'm not able to start the splunkd service. I tried starting Splunk and checking its status:
splunkd 6104 was not running.
Stopping splunk helpers...
[ OK ]
Removing stale pid file... done.
The most common cause of this is splunkd being run as root when it was supposed to run as a less privileged user. While running as root it takes ownership of many files, including indexes and .conf files; when you later start it as the less privileged user, startup fails because that user no longer owns those files.
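To confirm this is the cause, you can list everything under the Splunk install tree that is not owned by the expected service account. The path /opt/splunk and the user name splunk below are assumptions; substitute your own $SPLUNK_HOME and service user:

```shell
# Assumed install path and service account -- adjust for your environment.
# Prints every file/directory under the Splunk tree NOT owned by user "splunk";
# any output (e.g. root-owned index or .conf files) confirms the ownership problem.
find /opt/splunk ! -user splunk -ls
```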
Solving the issue is usually as simple as a recursive chown on the Splunk directory, changing the owner to the less privileged user:
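A minimal sketch of the fix, assuming Splunk is installed at /opt/splunk and should run as the unprivileged account splunk:splunk (both are assumptions; substitute your own $SPLUNK_HOME and service user/group):

```shell
# Hand the whole Splunk tree back to the unprivileged service account
# (assumed here to be splunk:splunk under /opt/splunk).
sudo chown -R splunk:splunk /opt/splunk

# From now on, start Splunk as that user rather than as root.
sudo -u splunk /opt/splunk/bin/splunk start
```

To keep this from happening again on reboot, you can also configure boot-start to run as that user, e.g. `splunk enable boot-start -user splunk`, so the init/systemd unit never launches splunkd as root.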