Having trouble getting the KV Store to report that it is ready on any of the three members of an shcluster running Splunk 6.4.0 on CentOS 6.7.
There are 5 existing KV Store collections and none of them can be accessed.
The trouble began when an overzealous admin accidentally deleted directories on one of our shcluster members while it was running.
We attempted to remove the corrupted member using CLI commands issued from one of the other members, which seemed to work.
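For reference, the removal was roughly the following (the management URI below is a placeholder for our corrupted member, not the actual hostname):

```shell
# Run from a healthy shcluster member; the URI is a placeholder
# for the corrupted member's management endpoint.
/opt/splunk/bin/splunk remove shcluster-member \
    -mgmt_uri https://corrupted-member.example.com:8089
```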
Because the pid file and bin directory had been deleted, the CLI could not be used on the corrupted instance, so we manually killed the Splunk-related zombie processes it left behind.
We then deleted the corrupted /opt/splunk directory and untarred a fresh copy of Splunk 6.4.0 into a new /opt/splunk to replace the corrupted instance.
We followed the Splunk docs for "init" and "add new" to the shcluster. Once the instance was started, we issued CLI commands to confirm it was properly configured.
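The init/add sequence we followed was approximately this (all hostnames, ports, and the secret are placeholders for our environment):

```shell
# On the rebuilt instance: initialize its SHC configuration.
# URIs, replication port, and secret are placeholders.
/opt/splunk/bin/splunk init shcluster-config \
    -mgmt_uri https://new-member.example.com:8089 \
    -replication_port 9200 \
    -conf_deploy_fetch_url https://deployer.example.com:8089 \
    -secret <shcluster_secret>
/opt/splunk/bin/splunk restart

# From an existing member: add the rebuilt instance back to the cluster.
/opt/splunk/bin/splunk add shcluster-member \
    -new_member_uri https://new-member.example.com:8089
```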
The shcluster status is good and searches are possible from any shcluster member.
Attempting even a simple search such as: | inputlookup
yields errors indicating that the KV Store was not properly initialized.
If we had backups of the activity store data from the original three-member shcluster, a clean restart would make sense, i.e., rebuild all the stores from scratch. Two of the three members still have the data files on disk in their folders, but we cannot access them through Splunk to create backup CSV files. We are hoping someone can guide us through getting the activity store initialized.
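For completeness, this is how we have been checking state, and the destructive option we are trying to avoid on the two members that still hold data (a sketch; paths assume a default /opt/splunk install):

```shell
# Check KV Store state on each member; a healthy member reports "ready".
/opt/splunk/bin/splunk show kvstore-status

# Last-resort option we do NOT want to run on the members that still
# hold data: wipe the local KV Store so mongod re-initializes cleanly.
/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk clean kvstore --local
/opt/splunk/bin/splunk start
```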
The mongod.log on the two remaining original shcluster members contains events such as:
Error in heartbeat request to ------------ InvalidReplicaSetConfig Our replica set configuration is invalid or does not include us
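That error suggests mongod's replica set configuration still references the deleted member. One non-destructive step we have considered, assuming at least one member (e.g. the captain) still has intact KV Store data, is resyncing a stale member from it; we are unsure whether this works when the replica set configuration itself is invalid:

```shell
# Run on a member whose KV Store is stale, with splunkd running;
# pulls a fresh copy of the KV Store data from the captain.
/opt/splunk/bin/splunk resync kvstore
```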
We have tried most of the non-destructive suggestions provided in Answers and Google searches.