Deployment Architecture

Failed to Register with Cluster Master

venkateshparank
Path Finder

WARN CMSlave - Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json master=clustermaster.domain.com:8089 rv=0 gotConnectionError=0 gotUnexpectedStatusCode=1 actual_response_code=500 expected_response_code=2xx status_line="Internal Server Error" socket_error="No error" remote_error=Cannot add peer=<IP> mgmtport=8089 (reason: bucket already added as clustered, peer attempted to add again as standalone. guid=F4204358-8FF9-4DD2-A09B-A0B51735559B bid= catch_all~747~F4204358-8FF9-4DD2-A09B-A0B51735559B). [ event=addPeer status=retrying 
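For reference, the endpoint that registration call targets can also be queried directly on the cluster manager to see which peers it currently lists; this is only a sketch, with placeholder credentials and the hostname taken from the warning:

# list the peers the cluster manager currently knows about (credentials are placeholders)
curl -sk -u admin:changeme "https://clustermaster.domain.com:8089/services/cluster/master/peers?output_mode=json" | python -m json.tool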


thambisetty
SplunkTrust

I faced the exact same issue in one of our multi-site indexer clusters when I upgraded it from an earlier 9.0.x release to 9.0.5.

After upgrading Splunk on the indexer, the virtual machine (VM) running the indexer unexpectedly went down. When I restarted the VM, I discovered that the Splunk service was already running, and the version displayed was the latest one. However, I failed to notice that it was experiencing problems connecting to the cluster manager.

I completed the upgrade, but about 15 days later the vulnerability management team requested another Splunk version upgrade. When I checked the version with the splunk version command, it displayed 9.0.5. However, the $SPLUNK_HOME/etc/splunk.version file still contained the old version, indicating the upgrade had not completed successfully.
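For anyone verifying the same thing, those two checks look roughly like this (a sketch, assuming a standard $SPLUNK_HOME):

# compare the version the binary reports with the version recorded on disk
$SPLUNK_HOME/bin/splunk version
cat $SPLUNK_HOME/etc/splunk.version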

Realizing this, I put the cluster master in maintenance mode, stopped the Splunk service on the faulty indexer, and cleared the standalone buckets using the commands mentioned below. Unfortunately, while restarting the Splunk service on the faulty indexer, the server went down again.

# finding standalone buckets
find $SPLUNK_DB/ -type d -name "db*" | grep -P "db_\d*_\d*_\d*$"
# converting standalone buckets to clustered buckets
# 5A0E298B-0AFB-4d56-9dD0-A64dfdfd19DA8 is the GUID of the cluster manager (master)
find $SPLUNK_DB/ -type d -name "db_*" | grep -P "db_\d*_\d*_\d*$" | xargs -I {} mv {} {}_5A0E298B-0AFB-4d56-9dD0-A64dfdfd19DA8


I repeated this process two to three times, but it did not resolve the issue.

Finally, I cleared the $SPLUNK_HOME/etc/instance.cfg file on the faulty indexer and restarted the service. This time, the indexer successfully joined the cluster.
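A minimal sketch of that last step (keeping a backup of the file rather than deleting it outright):

# on the faulty indexer: move the GUID file aside so Splunk regenerates it on the next start
$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/etc/instance.cfg $SPLUNK_HOME/etc/instance.cfg.bak
$SPLUNK_HOME/bin/splunk start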

————————————
If this helps, give a like below.

richgalloway
SplunkTrust
Thank you for sharing. Do you have a question?
---
If this reply helps you, Karma would be appreciated.

venkateshparank
Path Finder

How do I fix that error?


richgalloway
SplunkTrust
It's not an error, it's just a warning.
Are you seeing this message often? How often?
Did you do anything on the indexer cluster prior to the message appearing?
---
If this reply helps you, Karma would be appreciated.

venkateshparank
Path Finder

The peer node is not being added to the cluster.

When we checked the logs on the indexer, we saw the warning above.
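For reference, the warning appears in the indexer's own splunkd.log and can be found with something like this (a sketch, assuming the default log location):

# search the indexer's log for the registration warning
grep "Failed to register with cluster master" $SPLUNK_HOME/var/log/splunk/splunkd.log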


richgalloway
SplunkTrust
The message indicates the new indexer has a bucket that already exists on another indexer. That shouldn't happen. Do you know the history of the new indexer?
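One way to check, based on the bucket id in the warning (catch_all~747~F4204358-...), is to look for that bucket's directory on the new indexer. This is only a sketch and assumes the catch_all index uses its default path under $SPLUNK_DB:

# bucket 747 of index catch_all; a standalone copy is named db_<newest>_<oldest>_747,
# a clustered copy has the origin peer GUID appended after the id
find $SPLUNK_DB/catch_all -type d -name "db_*_*_747*"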
---
If this reply helps you, Karma would be appreciated.