We faced a similar error. The problem was with port access: the License Manager's management port had somehow become unreachable. Before anything else, make sure to test connectivity
from every source to the License Manager.
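A minimal sketch of that connectivity test, assuming the default Splunk management port 8089 (the hostname below is a placeholder; substitute your License Manager's address):

```shell
#!/usr/bin/env bash
# Basic TCP reachability check for the Splunk management port.
# license-manager.example.com is a placeholder for your LM host.

check_port() {
  local host=$1 port=$2
  # /dev/tcp is a bash pseudo-device; timeout guards against hangs.
  if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "REACHABLE ${host}:${port}"
  else
    echo "UNREACHABLE ${host}:${port}"
  fi
}

check_port license-manager.example.com 8089
```

Run this from each host that needs a license (indexers, search heads, forwarders); any UNREACHABLE result points at a firewall or network problem rather than a Splunk configuration problem.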
I was able to resolve this issue by changing the settings on the LM. The SSL certificate path set for "SSLCARootPath" in server.conf was incorrect: I had removed a custom app that was setting one common SSLCARootPath, so when Splunk restarted it fell back to the server.conf in system/local, which pointed at an old SSL cert.
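For anyone hitting the same thing, a sketch of where that setting lives (in recent server.conf versions the attribute is spelled sslRootCAPath under the [sslConfig] stanza; the path below is only an example):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf
[sslConfig]
# Must point at the same CA chain on the LM and on every peer.
# Removing an app can silently change which copy of this file wins,
# leaving a stale path like this one behind. Example path only.
sslRootCAPath = /opt/splunk/etc/auth/cacert.pem
```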
I don't see an "accept" button on this thread. My case was a bit different: my OS admin had set a restriction on the license master, and the connection went through after they allowed the indexer on the LM server. I don't know exactly how they did that.
There are some troubleshooting options; I will just list a few of them.
On the license master:
- check if Splunk is running
- check the file $SPLUNK_HOME/var/log/splunk/splunkd.log for any SSL-related errors
- verify in web.conf that the option mgmtHostPort = <IP address:port> is NOT set to a port other than 8089
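For reference, a sketch of that web.conf setting (the stanza name and the 8089 default are standard; the IP is an example, and the file may also live in an app's local directory on your system):

```ini
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
# Management interface; leaving this at the default port 8089
# (or unset) keeps license slaves able to reach the standard port.
mgmtHostPort = 127.0.0.1:8089
```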
On one of the license slaves:
- verify that you can connect to the license master on port 8089 over HTTPS (using curl, for example)
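A minimal sketch of that curl check (the hostname is a placeholder; -k skips certificate verification so that reachability problems and certificate problems don't get mixed up):

```shell
# From a license slave: does the LM answer on the management port?
# license-master.example.com is a placeholder for your LM host.
curl -k -sS --connect-timeout 5 \
  "https://license-master.example.com:8089/services/server/info" \
  && echo "management port reachable" \
  || echo "connection failed (check port 8089 / firewall)"
```

If this succeeds with -k but Splunk still reports license errors, the port is fine and the problem is more likely the certificates themselves.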
Also check if there were any changes in your network just before the error started to happen.
Hope this helps ...
Would you please share details on which .crt or key files need to be shared among the Splunk instances to connect to the license master?
I am adding new indexers to the cluster master, but they are not able to connect to the existing license master. All network connections are fine, so I suspect a cert/key mismatch.
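One way to check that suspicion: compare the SHA-256 fingerprint of the CA file on each instance. If the license master and the new indexers show different fingerprints, they are not trusting the same CA. This is only a sketch; the path in the example is Splunk's default location, not necessarily yours.

```shell
# Print a certificate's SHA-256 fingerprint so the CA files on
# different instances can be compared by eye.
cert_fp() {
  openssl x509 -in "$1" -noout -fingerprint -sha256
}

# Example (default Splunk CA location; adjust for your install):
#   cert_fp /opt/splunk/etc/auth/cacert.pem
```

Run it on the license master and on each new indexer, then diff the output.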
Thanks all for the tips.
I was able to resolve this issue. It was due to an incorrect SSL certificate path (SSLCARootPath) in server.conf, which differed from the one on the other Splunk hosts: indexers, SHs, HFs.
It occurred because I had removed a custom app whose server.conf carried the correct SSL cert setting.