We have upgraded the Splunk servers from 6.5 to 6.6 and are seeing the following error:
Search peer has the following message: Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json master=:8089 rv=0 gotConnectionError=1 gotUnexpectedStatusCode=0 actual_response_code=502 expected_response_code=2xx status_line="Error connecting: Connection refused" socket_error="Connection refused" remote_error= [ event=addPeer status=retrying AddPeerRequest: { _id= active_bundle_id=9884CA425F0224F22F37BE784337C463 add_type=Initial-Add base_generation_id=0 batch_serialno=1 batch_size=4 forwarderdata_rcv_port=9997 forwarderdata_use_ssl=0 last_complete_generation_id=0 latest_bundle_id=9884CA425F0224F22F37BE784337C463 mgmt_port=8089 name=6A1C1358-0C02-4D60-B58B-EA903E3D0991 register_forwarder_address= register_replication_address= register_search_address= replication_port=8080 replication_use_ssl=0 replications= server_name= site=default splunk_version=6.6.0 splunkd_build_number=1c4f3bbe1aea status=Up } ].
Are there any more steps required for the search peers to register again?
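One thing that stands out in the log above is that the `master=` field is empty (`master=:8089`), which can indicate the peer's `master_uri` is missing or malformed. The basic sanity checks from a peer look roughly like this (a sketch only; `cluster-master.example.com` is a placeholder hostname, not from this thread):

```shell
# Confirm the peer's clustering stanza actually names the cluster master
# ($SPLUNK_HOME/etc/system/local/server.conf on the peer).
$SPLUNK_HOME/bin/splunk btool server list clustering --debug

# Confirm the peer can reach the master's management port at all.
curl -k -u admin \
    "https://cluster-master.example.com:8089/services/cluster/master/info?output_mode=json"

# If the configuration looks right, a restart normally retriggers registration.
$SPLUNK_HOME/bin/splunk restart
```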
Same issue on 7.1.1.
I had to open a case with Splunk support, and they had to provide a patch. I suggest contacting Splunk if you hit this issue. I suspect it has been fixed in newer versions of Splunk.
We are already on the latest version, 7.0.3.
And thanks; we have opened a case with Splunk as well.
Was the issue resolved?
I ask because I am now facing the same issue on version 7.0.
Hi, any solution?
Any solution to this issue? We upgraded to v6.6.1 over a month ago, but the message just started showing up on all indexers today.
Any solution?
I am having the same issue and we are on version 6.6.3.2.
This issue appears to be resolved in 6.6.3.
I'm also getting
Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json master=splunkdeployer:8089 rv=0 gotConnectionError=1 gotUnexpectedStatusCode=0 actual_response_code=502 expected_response_code=2xx status_line="Error connecting: Connection refused" socket_error="Connection refused" remote_error= [ event=addPeer status=retrying AddPeerRequest: { _id= active_bundle_id=4B4EFCBBAA14C9AD694D9BE6FAB82C22 add_type=Initial-Add base_generation_id=0 batch_serialno=1 batch_size=1 forwarderdata_rcv_port=9997 forwarderdata_use_ssl=0 last_complete_generation_id=0 latest_bundle_id=4B4EFCBBAA14C9AD694D9BE6FAB82C22 mgmt_port=8089 name=F645968A-05ED-411F-8F5E-C467EED3B184 register_forwarder_address= register_replication_address= register_search_address= replication_port=9887 replication_use_ssl=0 replications= server_name=site1indexer1 site=site1 splunk_version=6.6.0 splunkd_build_number=1c4f3bbe1aea status=Up } ].
This is on a new install, so I assumed it was a configuration issue; however, testing with nmap shows the ports are open on the relevant boxes.
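Worth noting: nmap reporting a port as open only proves that something completed the TCP handshake on that port, not that splunkd itself is answering there. A quick way to test the actual management endpoint (a sketch only; the hostname is taken from the log above, adjust to your environment):

```shell
# Ask the master's management port for its cluster info directly.
curl -k -u admin \
    "https://splunkdeployer:8089/services/cluster/master/info?output_mode=json"

# On the master host, check which process is really listening on 8089.
ss -tlnp | grep 8089
```

If curl is refused while nmap reports the port open, something other than splunkd (for example another service or a proxy) may be bound to 8089, or splunkd may still be starting up.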