Deployment Architecture

Make the master node return the FQDN of peers instead of the IP address

jessyreg
Explorer

Hello all,

 

So, we ran into (yet) another issue with Splunk...

We have provisioned:

- 1 Cluster Manager / Deployer

- 2 indexer peers

- 2 search heads

There is another search head (standalone, not in our cluster) that needs to reach our indexers over the Internet.

However, our indexers and cluster manager are behind a NAT. So we exposed the Cluster Manager and set up the corresponding translation so that it can be reached (the way it should be done).

We also added the register_search_address option under the [clustering] stanza in server.conf and set it to the FQDN (so the NAT can translate to each of the indexers).

But what does this genius Splunk Manager Node do instead: it sends the IP address, of course! The internal one, to make sure the Search Head will never be able to reach any peers. So we get these errors:

ERROR DistributedPeerManagerHeartbeat [1665 DistributedPeerMonitorThread] - Status 502 while sending public key to cluster search peer https://10.X.X.X:8089:
ERROR DistributedPeerManagerHeartbeat [1665 DistributedPeerMonitorThread] - Send failure while pushing public key to search peer = https://10.X.X.X:8089 , Connect Timeout
ERROR DistributedPeerManagerHeartbeat [1665 DistributedPeerMonitorThread] - Unable to establish a connection to peer 10.X.X.X. Send failure while pushing public key to search peer = 10.X.X.X.
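
A quick way to confirm what the manager node is actually advertising to search heads is to query its peers endpoint (a sketch only: the credentials are placeholders, and on older versions the endpoint is cluster/master/peers instead of cluster/manager/peers):

curl -k -u admin:changeme "https://fqdn-of-managernode:8089/services/cluster/manager/peers?output_mode=json"

Each peer entry should show the address and port the manager has registered for that peer.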

For reference, the configuration done on each peer indexer:

[clustering]
manager_uri = https://fqdn-of-managernode:8089
mode = peer
pass4SymmKey = blablabla

register_search_address = https://fqdn-of-current-indexer-node

Shall we file a bug?

Thank you in advance for your answer!

1 Solution

jessyreg
Explorer

So, in the end it was resolved: even if btool shows that the parameter is read and applied when you put it inside an app, don't believe it.

Instead, I put this inside the system/local/server.conf file of each peer, did a rolling restart, and it worked.
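
For anyone hitting the same issue, this is roughly what it ends up looking like (a sketch; the path assumes a default $SPLUNK_HOME install and the FQDN is a placeholder):

On each peer, in $SPLUNK_HOME/etc/system/local/server.conf:

[clustering]
# same value as before, just moved from the app to system/local
register_search_address = https://fqdn-of-current-indexer-node

Then, from the manager node:

$SPLUNK_HOME/bin/splunk rolling-restart cluster-peers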

 

 



isoutamo
SplunkTrust

Good to know. There are some other settings which don't work in apps; those must be put in etc/system/local/xyz.conf files as well. I really hope that Splunk will document those separately some day.

PickleRick
SplunkTrust

First things first:

1) After you added this setting did you restart the indexer?

2) Did you check your config with btool?
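
For example, something along these lines on a peer (a sketch, assuming a default $SPLUNK_HOME):

$SPLUNK_HOME/bin/splunk btool server list clustering --debug | grep register_search_address

The --debug flag also shows which file the value is actually coming from (app vs. system/local).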


jessyreg
Explorer

Yes, did that. 

btool reported the value of register_search_address as I set it.

And before that, the service was restarted through systemctl.


PickleRick
SplunkTrust

As I have never used this feature myself, I can only offer my suspicions and someone else would need to confirm this, but I'm afraid it may only work if you set it before joining the cluster. If you set it afterwards, the CM might remember the old settings. Maybe a CM restart could help. Or maybe not.


jessyreg
Explorer

So, I had the same suspicion about the CM restart, so I already did that, and nothing improved.

But having to set up such a feature beforehand makes no sense (although with Splunk nothing would surprise me anymore...); for me this is clearly a bug. At least in 9.0.5.
