Hello all,

So, we ran into (yet) another issue with Splunk...

We have provisioned:
- 1 Cluster Manager / Deployer
- 2 indexer peers
- 2 search heads

There is another search head (standalone, not in our cluster) that needs to reach our indexers over the Internet. However, our indexers and cluster manager are behind a NAT. So we exposed the Cluster Manager and set up the corresponding translation (the way it should be done). We also set the `register_search_address` option under the `[clustering]` stanza in server.conf, with the FQDN, so the NAT can translate to each of the indexers.

But what does this genius Splunk Manager Node do instead: it sends the IP address, of course! The internal one, so the search head will never be able to reach any peer. So we get these errors:

```
ERROR DistributedPeerManagerHeartbeat [1665 DistributedPeerMonitorThread] - Status 502 while sending public key to cluster search peer https://10.X.X.X:8089:
ERROR DistributedPeerManagerHeartbeat [1665 DistributedPeerMonitorThread] - Send failure while pushing public key to search peer = https://10.X.X.X:8089 , Connect Timeout
ERROR DistributedPeerManagerHeartbeat [1665 DistributedPeerMonitorThread] - Unable to establish a connection to peer 10.X.X.X. Send failure while pushing public key to search peer = 10.X.X.X.
```

For reference, the configuration done on each indexer peer:

```
[clustering]
manager_uri = https://fqdn-of-managernode:8089
mode = peer
pass4SymmKey = blablabla
register_search_address = https://fqdn-of-current-indexer-node
```

Shall we file a bug?

Thank you in advance for your answer!
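One detail that may be worth double-checking before filing a bug: the server.conf spec documents `register_search_address` as taking a plain IP address or fully qualified domain name, not a full URI with a scheme. A sketch of what each peer's stanza might look like under that reading (hostnames kept from the post above, not verified against a live cluster):

```
# server.conf on each indexer peer -- a sketch, assuming register_search_address
# expects a bare address (no https:// prefix), per the server.conf spec
[clustering]
mode = peer
manager_uri = https://fqdn-of-managernode:8089
pass4SymmKey = blablabla
# advertise the NAT-translatable FQDN as an address, not a URI
register_search_address = fqdn-of-current-indexer-node
```

If the scheme-prefixed value fails to parse as an address, the manager may fall back to the peer's registered internal IP, which would match the 10.X.X.X addresses in the errors.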