We had some search heads in a cluster, running on VMs. A couple of the VMs were deleted without properly removing the search heads from the search head cluster first. Now, Splunk is complaining that it can't reach some of the search heads in the cluster. When we try to remove them from the cluster, Splunk won't do it since it gets no reply from the search heads (obviously, since the VMs are deleted).
How do we "force" the search heads out of the search head cluster when the search heads no longer exist?
Can you transfer the captaincy to a different search head using the method in the link below? That might reset the SHC member list on the captain.
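For reference, a manual captaincy transfer is run from a current cluster member and points at the management URI of the member that should become the new captain. A minimal sketch, assuming a default management port of 8089 (the host names and credentials below are placeholders, not values from this thread):

```shell
# Run from any current SHC member. The member named by -mgmt_uri
# becomes the new captain; host, port, and credentials are placeholders.
splunk transfer shcluster-captain \
    -mgmt_uri https://new-captain.example.com:8089 \
    -auth admin:changeme

# Then check which member is captain and what the member list looks like:
splunk show shcluster-status -auth admin:changeme
```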
No restart required, so no outage as such.
Hi hettervi, Schedule a brief outage window and shut down all remaining search heads in the cluster. Once they are all down, start them back up one by one. I believe this will resolve the issue of the "down" hosts.
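The stop/start cycle above can be sketched as follows. Note that if the deleted members still count toward the captain-election quorum, the surviving members may fail to elect a captain after the restart; in that case Splunk provides a bootstrap command that lists only the surviving members. This is a hedged sketch, not a guaranteed procedure for this situation — the member URIs and credentials are placeholders:

```shell
# During the outage window, on every surviving member:
splunk stop

# Then bring the members back up one by one:
splunk start

# If no captain gets elected (the dead members may still count toward
# quorum), re-bootstrap a captain on one surviving member, listing ONLY
# the members that still exist (URIs below are placeholders):
splunk bootstrap shcluster-captain \
    -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089" \
    -auth admin:changeme
```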
Please let me know if this answers your question! 😄
I don't expect it has any process to run through and remove the history of those search heads, but that shouldn't matter. If any issues persist, you'd have to involve Splunk support, since it would be something fundamental to the way Splunk works, and they'd have to raise enhancement requests to improve it in future versions.
That being said, I'd be really surprised if this had any lasting effect on the cluster once you've restarted everything.
Please accept this answer if it works out for you 😄 (in any case let me know how it works out)
These instructions require the members to have Splunk running and to either be available at the command line or otherwise have their management port reachable.
For the case in question, the members no longer exist, so these commands can't be run against them.
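That said, Splunk's removal command also has a variant that is run from a *surviving* member, naming the dead member's management URI, rather than being run on the member being removed. It may be worth trying here, since only the live member needs to be up (whether it succeeds when the captain cannot contact the target at all is not something this thread confirms). The URI and credentials below are placeholders:

```shell
# Run on any surviving member; -mgmt_uri identifies the member to remove.
# The removed member itself does not need to be reachable at its own CLI.
splunk remove shcluster-member \
    -mgmt_uri https://deleted-sh.example.com:8089 \
    -auth admin:changeme
```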