Getting Data In

Why am I getting "Failed to add peer to the master... Connection refused" trying to add an indexer to our current indexer cluster?

RecoMark0
Path Finder

Hello,

I am trying to add two more indexers to our current Splunk setup. Our current setup is a search head and two indexers using replication. These are all Linux machines.
I have attempted to add the first one, but it is not working. The error I get in Distributed Search >> Search Peers on the search head is:

Error [00000080] Failed 12 out of 11 times. REST interface to peer is taking longer than 5 seconds to respond on https. Peer may be over subscribed or misconfigured. Check var/log/splunk/splunkd_access.log on the peer

The splunkd_access.log does not have much useful info that I can see.

Another strange thing is that the warning on the search head lists an IP that does not exist for this new indexer, and I have no idea where it got that IP from (the server's IP should match the one embedded in the server name; these are just random IP values I substituted for the real ones for this example):

Failed to add peer 'guid=CA7694EA-40EE-4B40-8506-DAFD18BCAB2E server name=ip-99-8-321-101 ip=99.0.4.23:8089' to the master. Error=http client error=Connection refused, while trying to reach https://99.0.4.23:8089/services/cluster/config

Here is how I set up the new indexer:
1. Created the Linux machine and installed Splunk Enterprise 6.4 on it
2. Copied over the following conf files from a previously running indexer: server, alert_actions, authentication, authorize, props, transforms, web
3. Copied over server.pem from etc/auth
4. When trying to restart Splunk, the "Waiting for web server at http://127.0.0.1:8000 to be available..." message would never complete
5. Thought it might be because I copied over the server.pem, so I restored the backup.
6. Commented out the sslKeysfilePassword in server.conf
7. Tried restarting again, and Splunk came up

However, now I am getting the two errors I mentioned above.
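
For reference, a quick sanity check that splunkd and its management port respond at all on the new indexer looks something like this (the install path and admin credentials below are placeholders, not my real values):

    /opt/splunk/bin/splunk status
    curl -k -u admin:changeme https://localhost:8089/services/server/info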

Questions
1. Where is that IP value coming from in the search head error?
2. Are there other conf files I should not have copied over?
3. Did I mess the server up as soon as I copied over the other indexer's server.pem and tried to restart Splunk?
4. Do I need to change the pass4SymmKey in server.conf?

Thank you!

RecoMark0
Path Finder

Finally solved this! Here is what I did:

  1. Changed the default admin password on the new indexer. Run: splunk edit user admin -password new_value -role admin -auth admin:changeme
  2. Added register_replication_address and register_search_address to the new indexer's server.conf under the [clustering] stanza (see the sample stanza after this list). Note: use the indexer's own IP, not the VPN IP.
  3. Reset the pass4SymmKey value on ALL indexers, current and new, and on the search head as well. This is done by just putting the same plain-text value in each server.conf and restarting Splunk.
  4. In the Search Head UI, went to Distributed Management Console -> Settings -> General Setup, then Edit -> Edit Server Roles for the new indexer. I did not actually change anything, but still hit Save and Apply Changes. Apparently this is a known bug!
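
For illustration, the relevant part of the new indexer's server.conf ended up looking roughly like this (the master URI, key, and IP are placeholders, not my real values):

    [clustering]
    mode = slave
    master_uri = https://<cluster-master>:8089
    pass4SymmKey = <shared-cluster-key>
    # Addresses the peer advertises to the master / search head.
    # Use the indexer's real IP here, not the VPN address.
    register_replication_address = <indexer-real-ip>
    register_search_address = <indexer-real-ip>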

maraman_splunk
Splunk Employee
  • Make sure you can resolve your hostname locally (configure the hostname and /etc/hosts; you should be able to ping the hostname).
  • It would be a good idea to share a hosts file between your servers if you don't have DNS (see the sketch after this list).
  • Change the default password, as you found out.
  • If you copy server.conf with a password in it, you need to copy splunk.secret before the first Splunk start, or Splunk will not be able to read the value and you will have to retype the clear passphrase (as it looks like you did).
  • If the advertised IP/host is not the one reachable by the search head (because of NAT, VPN, ...), you can force it: look in server.conf.spec for the register_replication_address, register_forwarder_address and register_search_address settings (notably in AWS, you may have to force replication on the public IP).
  • Look for any messages in splunkd.log on the indexers and the search head (make sure the bundle is correctly installed). Hope that helps.
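
A rough sketch of the hosts-file and splunk.secret points above (the hostnames, IPs, and /opt/splunk install path are placeholders):

    # /etc/hosts entries shared by every cluster member
    10.0.1.11   idx1
    10.0.1.12   idx2
    10.0.1.13   idx3-new

    # On the new indexer, copy splunk.secret from an existing member
    # BEFORE the first start, so encrypted values in copied .conf files can be read
    scp idx1:/opt/splunk/etc/auth/splunk.secret /opt/splunk/etc/auth/splunk.secret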

RecoMark0
Path Finder

Thank you for this! I added register_replication_address and register_search_address to the indexer, but now I am getting: "Failed to contact license master: reason='WARN: path=/masterlm/usage: invalid signature on request'"

Looks like this is because the pass4SymmKey values differ. Unfortunately, I do not remember the original clear-text value of that key from when the original indexers were set up.

How can I get ALL servers on the same pass4SymmKey value?

jkat54
SplunkTrust

You set the new pass4SymmKey in $SPLUNK_HOME/etc/system/local/server.conf and restart each instance. I converted my comment to an answer, as I mentioned this in my comment as well.
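
Something like this, as a sketch (the key value is a placeholder; an indexer cluster also keeps its own pass4SymmKey under the [clustering] stanza):

    # $SPLUNK_HOME/etc/system/local/server.conf on every indexer and the search head
    [general]
    pass4SymmKey = <same-plaintext-key-everywhere>

    # then restart so Splunk re-encrypts the value
    $SPLUNK_HOME/bin/splunk restart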

Also, if you are distributing server.conf via an application (not recommended, but some people do), you'll have to remove the encrypted pass4SymmKey from /etc/apps/appName/default/server.conf, because Splunk encrypts the one found in /etc/apps/appName/local/server.conf into the one in the default folder instead.

jkat54
SplunkTrust

Are you specifying this when connecting to the indexer: https://ip-99-8-321-101:8089 ?

If so, try running nslookup ip-99-8-321-101 on the command line to see if DNS has the wrong IP for that server (quick example below).

I believe you have a DNS issue.

The pass4SymmKey should match on every server in the cluster.
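
For example (using the sample hostname from your post; run the second command on the indexer itself):

    nslookup ip-99-8-321-101     # what the search head resolves this name to
    ip addr show                 # compare with the addresses the indexer actually has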

RecoMark0
Path Finder

You are right, the nslookup fails for that hostname. I also updated the pass4SymmKey value.

Another issue I noticed was that in the Search Peers list, the new indexer does not have the Cluster label value.

jkat54
SplunkTrust

Is it working? Want me to convert my comment to an answer?

RecoMark0
Path Finder

Unfortunately, it is still not working. What is stranger is that nslookup fails for the other two (working) indexers' ip- hostnames as well, yet they show up fine in the search head.

This may be an authorization issue, as I am unable to log into the :8089/services page on this indexer either.

Thank you for your help!

jkat54
SplunkTrust

So is it still using the default admin:changeme user/pass? You have to change this password to enable the REST API.

jkat54
SplunkTrust

Are you specifying this when connecting to the indexer: https://ip-99-8-321-101:8089 ?

RecoMark0
Path Finder

A little update:
1. It turns out the IP that the search head sees (99.0.4.23:8089), which is incorrect, is actually the VPN IP that the indexer goes through to connect to the search head.
2. The authentication issue with https://ip-99-8-321-101:8089/services was fixed when I reset the admin password (it was not changeme, so I renamed the passwd file to passwd.bak, then changed the password from changeme to something I'd know; see the sketch below).
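
Roughly, that password reset looked like this (assuming a /opt/splunk install; the new password value is a placeholder):

    /opt/splunk/bin/splunk stop
    mv /opt/splunk/etc/passwd /opt/splunk/etc/passwd.bak   # admin falls back to changeme on next start
    /opt/splunk/bin/splunk start
    /opt/splunk/bin/splunk edit user admin -password <new_value> -role admin -auth admin:changeme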

It is still not working however.
