I have a license master set up at site A, and all Splunk boxes in site A have become slaves to the license master with no problem. I have two indexers and a search head in site B that need to be slaves to the license master in site A. When I change the license mode to slave, either from the GUI or the CLI, and give it the license master URI https://siteAlicense.local:8089, it bombs out with this error: In handler 'localslave': editTracker failed, reason='Unable to connect to license master, https://siteAlicense.local:8089 Connect to=https://siteAlicense.local:8089 timed out; exceeded 30sec'
Now the fun part: all my firewall rules between site A and site B are correct. If I run a tcpdump I can see the traffic making it to siteAlicense.local on port 8089, yet it still craps out.
I just ran into the same problem. I was able to resolve it by connecting to the correct management port. Apparently whoever did the install decided to change the management port from 8089 to 8090. One digit off causes failure!
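Worth ruling out on the master before chasing firewalls: the management port can be overridden by mgmtHostPort in web.conf, so 8089 isn't guaranteed. A minimal sketch that reads that setting, assuming the usual /opt/splunk install path (yours may differ, and the function name here is just illustrative):

```python
# Sketch: find the management port splunkd is actually configured to use.
# mgmtHostPort in web.conf overrides the 8089 default; the path below is the
# common default install location, not necessarily yours.
import configparser
from pathlib import Path

def management_port(web_conf="/opt/splunk/etc/system/local/web.conf"):
    """Return the configured management port, or 8089 if none is set."""
    path = Path(web_conf)
    if not path.exists():
        return 8089  # no local override, default applies
    cp = configparser.ConfigParser()
    cp.read(path)
    value = cp.get("settings", "mgmtHostPort", fallback="8089")
    # mgmtHostPort may look like "127.0.0.1:8090" or just "8090"
    return int(value.rsplit(":", 1)[-1])

print(management_port())
```

If this prints something other than 8089, point the slave's master URI at that port instead.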
That's the direction I have been leaning... but I have good name resolution from either host, by IP or by FQDN; they can both resolve. I kicked off a telnet to port 8089 from the slave to the master, and the telnet session just sits at "Trying x.x.x.x..." until it eventually times out. But the freaky part is that I am running a tcpdump on the master, and as soon as I kick off the telnet, the tcpdump shows inbound packets from that host on port 8089. I am a bit dumbfounded... the traffic is obviously getting there.
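That symptom — inbound packets visible in tcpdump on the master, but the client still timing out — usually means the SYN arrives but the SYN-ACK never makes it back: a host firewall on the master, splunkd not listening on that port, or an asymmetric return route. A small sketch for distinguishing a timeout from an outright refusal at the TCP layer (the hostname in the usage comment is from the thread, and the function is illustrative):

```python
# Sketch: probe the master's management port from the slave and report what
# happens at the TCP layer. "timeout" with packets still visible on the master
# suggests the reply path is blocked; "refused" means the port is reachable
# but nothing is listening there (e.g. splunkd moved to another port).
import socket

def probe(host, port=8089, timeout=5):
    """Return 'open', 'refused', 'timeout', or an error string."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except socket.timeout:
        return "timeout"
    except ConnectionRefusedError:
        return "refused"
    except OSError as exc:  # DNS failure, unreachable network, etc.
        return f"error: {exc}"

# e.g. probe("siteAlicense.local")  -> "timeout" would match the 30sec hang above
```

If this returns "timeout" while tcpdump on the master keeps showing the inbound SYNs, check the master's own firewall (e.g. iptables) and the return route before blaming the site-to-site rules.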
Check your pool by clicking edit. Are you using the "Specific indexers" option under "Which indexers are eligible to draw from this pool?"?
If so, check for currently unauthorized hosts listed in the "Available indexers" window at the bottom left. Anything with a green plus has contacted the license master but hasn't been allowed to utilise it.