I recently set up two dedicated search heads in my Splunk environment. After installing Splunk Enterprise, I copied and pasted the contents of my valid Enterprise license XML file into the licensing section on each search head. However, after 72 hours, searching was disabled on my search heads, and when I look at my search peers under "Distributed search", they all show a status of "duplicate license".
Is there a reason why my search peers think that there is a duplicate license issue?
When you have more than one Splunk instance, you need to set up a license master and point everything at it. Just about any instance can act as your license master; I personally run mine on my deployment server.
Check out the documentation on this here:
The documentation walks you through doing it with either the Web GUI or the CLI. What isn't obvious from the documentation is how to do it through a configuration file: add the following stanza to your server.conf:

[license]
master_uri = https://yourserver.splunk.com:8089
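The same setting can also be applied from the CLI rather than by editing server.conf by hand. A sketch, where the hostname and the admin credentials are placeholders you would replace with your own:

```shell
# Point this instance at the license master (hostname and credentials are placeholders).
splunk edit licenser-localslave -master_uri https://yourserver.splunk.com:8089 -auth admin:changeme

# Restart so the change takes effect.
splunk restart
```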
There should already be an auto-generated license pool for your Enterprise key with a * in the slaves column, so any slave that checks in with your license master will land in that default pool. Refer to the documentation above if you need to take this further.
Hope this helps!
The errors I was seeing above were definitely due to those slaves being blocked on port 8089. I followed your suggestion and ran a wget from my slaves, then worked with our firewall administrator to allow traffic from the slaves back to the license master over port 8089. It now works. I have a setup with a central license master running on my search head, and the other Splunk Enterprise instances pointing back to it as license slaves.
I have a feeling that setting up my licensing schema in this recommended way will likely resolve that "duplicate license" issue I was previously seeing. I will keep an eye on it.
Many thanks for your help!
Thanks for the advice! I did attempt to set up one of my search heads as the license master and point the rest of my indexers to it as license slaves. Unfortunately, when I try to point my indexers to that server as slaves, each of them displays one of the following two error messages:
Bad Request — In handler 'localslave': editTracker failed, reason='Unable to connect to license master: https://:8089 Connect to=https://:8089 timed out; exceeded 30sec'
Splunkd daemon is not responding: ("Error connecting to /services/licenser/localslave/license: ('The read operation timed out',)",)
Does this potentially point to network connectivity issues from slave to master? Permissions issues?
Yes, I commonly see that same error briefly when I have to bring down the license master for maintenance. So it would seem those slaves cannot connect over port 8089: either your username/password is incorrect, or the port is blocked.
If you are on Linux, try telnetting to your master node on port 8089 and see if it connects. Alternatively, run wget or curl against https://masternode:8089 and see what comes back. You can even pass curl/wget the username/password combination you want to check whether that works as well (this is basically exercising the REST API yourself to test connectivity).
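The port check above can also be scripted so you can run it on every slave. A minimal bash sketch using the /dev/tcp pseudo-device (the masternode hostname is a placeholder):

```shell
# Report whether a TCP connection to host:port can be established.
# Uses bash's /dev/tcp pseudo-device; the redirect fails if the
# connection is refused, blocked, or times out.
check_port() {
  local host=$1 port=$2
  if timeout 5 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Example: test the license master's management port from a slave.
check_port masternode 8089
```

If the port shows as open but an authenticated curl still fails, the problem is more likely the credentials than the network.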
The message you get back will determine the next troubleshooting step. Since some of them work and some don't, my guess is that you either fat-fingered the username/password, or, more likely, there is some kind of firewall/port block stopping the communication.