I've seen other questions on the Answers site, but I feel mine is distinct from those issues. We have a rolling message (across the search heads).
ServerA (or any of the others in the cluster) has the following messages: KV Store changed status to failed. Failed to start KV Store process. See mongod.log and splunkd.log for details.
Other solutions that appear to have worked involve changing the KV store member count to an ODD number and resetting it, due to a limitation in MongoDB. We have the SHC deployer and 3 search heads, but honestly we're not using the KV store anyway. Can we just disable the KV store to prevent the message from kicking up all the time?
If we can't just disable it, do we have to add another search head to remove the message? I can't recommend we remove one...
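For what it's worth, disabling the KV store is typically done per instance in server.conf; a minimal sketch, assuming you push it from the SHC deployer and restart each member (check the server.conf spec for your Splunk version before relying on this):

```ini
# server.conf (e.g. in an app pushed from the SHC deployer)
# Hedged example: verify the [kvstore] disabled setting against your version's docs
[kvstore]
disabled = true
```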
I had a very similar situation, and I realized that some collections were HUGE (in the range of 100 GB), which may cause mongod to start very slowly.
I searched mongod.log for errors, especially around mongod startup.
There was not much in there, except for some problems while trying to upgrade MongoDB to the new version.
I believe that, due to its huge size, the service takes too long to start, conflicts with its upgrade process or with Splunk itself, and in the end Splunk starts anyway without the KV store running.
This is what worked for me. CAREFUL, the data will be DELETED from the KV store; see the backup note below if you want to keep it, but since you are not using it you can just do the clean:
1) Stop the search head that has the stale KV store member.
2) Run the command splunk clean kvstore --local.
3) Restart the search head.
4) Run the command splunk show kvstore-status to verify.
If you have important data and you don't want to lose it, do a backup and restore first.
I hope this helps
I checked all of the servers and found that the KV store statuses reported by "| rest /services/server/info splunk_server=* | fields splunk_server, kvStoreStatus" do not match across servers.
The names are fine, but the cluster master lists all 7 servers, while the search heads only show 1 search head and the 3 indexers.
It also might be worth noting that the master is the only one with kvStoreStatus == "ready".
I want to fix it, not just make the error go away. Should all 7 be listed? What's the deal?
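As a cross-check, there is also a dedicated KV store status REST endpoint you can query per member; a hedged sketch (field names such as current.status may vary by Splunk version, so confirm against your REST API reference):

```
| rest /services/kvstore/status splunk_server=*
| fields splunk_server, current.status, current.replicationStatus
```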
I even tried disabling it via server.conf (as mentioned here: https://answers.splunk.com/answers/336932/how-to-disable-kvstore-on-a-heavy-forwarder.html), but the master still says the status changed to failed because no suitable servers were found.