Some time ago we had a Splunk search head cluster with nodes named “vm-splunk-sh01”, “vm-splunk-sh02”, and “vm-splunk-sh03”. Later we added some new nodes (vm-splunk-nsh01, vm-splunk-nsh02, etc.). After successfully adding the new nodes to the cluster, we stopped the old nodes vm-splunk-sh01, vm-splunk-sh02, and vm-splunk-sh03 without removing them with “splunk remove shcluster-member”, and then repurposed those machines by wiping and reinstalling them (that was my mistake). Now the new nodes’ logs contain entries like this:
2017-06-20T13:06:53.880Z I REPL [ReplicationExecutor] Error in heartbeat request to vm-splunk-sh02:8191; Location18915 Failed attempt to connect to vm-splunk-sh02:8191; couldn't initialize connection to host vm-splunk-sh02, address is invalid
How can we force the cluster to (fully) forget the old members?
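For reference, the removal step we skipped would normally look something like the sketch below (hostnames and the management port 8089 are just examples; adjust to your environment):

    # Run on the member being decommissioned, before shutting it down:
    splunk remove shcluster-member

    # Or run from the captain (or any other running member),
    # pointing at the member to remove:
    splunk remove shcluster-member -mgmt_uri https://vm-splunk-sh01:8089

    # Verify the remaining membership afterwards:
    splunk show shcluster-status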
OK, I am not sure about this exact issue, but I have faced a similar one: on my cluster I had a search head that I temporarily turned into a peer node for testing. That worked fine, but in the indexer cluster GUI on the master node it was still showing up as a search head that was down.
During my testing I had to restart splunkd on the master node, and once the restart was done the “old search head down” issue was fixed automatically.
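In case it helps, by restart I just mean the standard splunkd restart on the master (sketch; the install path is an assumption):

    # On the cluster master:
    /opt/splunk/bin/splunk restart splunkd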
I had to restart master too and there is no old peers in master's list, but they are still in logs. Splunk support specialists recommend me this solution: https://answers.splunk.com/answers/513239/remove-reference-to-host-in-mongodb.html. I will try and report here about the result.
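For anyone else hitting this: the heartbeat errors on port 8191 come from the KV store's MongoDB replica set, which apparently still lists the old members. I can't yet confirm that this is exactly what the linked answer prescribes, but the commands typically involved in resyncing a member's KV store look roughly like this (a sketch only, not a confirmed fix):

    # On an affected search head member, stop Splunk first:
    splunk stop

    # Wipe the local KV store cluster configuration so it resyncs:
    splunk clean kvstore --cluster

    splunk start

    # Check KV store / replica set health afterwards:
    splunk show kvstore-status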