Deployment Architecture

Failed to contact license master: reason='WARN: path=/masterlm/usage: invalid signature on request from ip=xx.xx.x.xxx'

cirkit1
Explorer

We are having an issue with our Splunk installation: it cannot run any searches in our Test environment. We see the following message on the search head:

Search peer xxxindex01xxx has the following message: Failed to contact license master: reason='WARN: path=/masterlm/usage: invalid signature on request from ip=xx.xx.x.xxx' first failure time=1463689647

Our Splunk deployment consists of two environments: Test and Production. Each environment has 1 search head, 1 master server, and 3 indexers. The Production Master Server is also the Master License Server for both environments (Test and Production). All systems/environments are running Splunk 6.3.3. The problem being reported is only on the 3 indexers in our Test environment.

All Splunk nodes run on VMs.
We currently have a 2 GB license set up in license pools: 250 MB allocated to the Test environment and 1.75 GB to Production. According to the license manager, the Test pool has used 0% of its quota.
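For context, license pools like ours are defined on the license master. Below is a hedged sketch of what the server.conf stanzas might look like; the pool names are placeholders, the GUID placeholders stand in for real indexer GUIDs, and the quotas are our 250 MB / 1.75 GB split expressed in bytes:

```ini
# server.conf on the license master (placeholder values, not our real config)
[lmpool:test_pool]
description = Test environment pool
# 250 MB expressed in bytes (250 * 1024 * 1024)
quota = 262144000
slaves = <test-indexer-guid-1>,<test-indexer-guid-2>,<test-indexer-guid-3>
stack_id = enterprise

[lmpool:prod_pool]
description = Production pool
# 1.75 GB expressed in bytes (1.75 * 1024 * 1024 * 1024)
quota = 1879048192
slaves = <prod-indexer-guid-1>,<prod-indexer-guid-2>,<prod-indexer-guid-3>
stack_id = enterprise
```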

The problem also appears to have started after patching the Linux servers with security fixes this past Saturday, 5/14.

Could use some guidance on how to diagnose the problem and how to fix it.

Splunk Answers suggests looking at pass4SymmKey. That's fine, but why would an OS patch affect this?

We ran the licenser command below on each affected indexer, since it fixed a previous issue, but it did not work this time:

./splunk edit licenser-localslave -master_uri 'https://master:port'

Also, if it were the pass key: I see that one exists in the [clustering] stanza, but I did not see one in the [general] stanza on one of our Prod indexers. Our Test indexers do seem to have the pass key in both stanzas.

If we were to add a pass key to the [general] stanza on all nodes, could I just copy the pass key from the license master's server.conf to all the others, or should the key be entered in plain text on each node, with each node restarted so that Splunk encrypts it on restart?
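For reference, a sketch of where the two keys live in server.conf (the values below are placeholders, not real keys). The [general] key is the one involved in the license master handshake; the [clustering] key is a separate secret compared only among indexer cluster members:

```ini
# $SPLUNK_HOME/etc/system/local/server.conf (placeholder values)
[general]
# Used for license master <-> slave authentication, among other things.
# Entered in plain text; Splunk rewrites it as an encrypted blob on restart.
pass4SymmKey = changeme-general

[clustering]
# Separate key, compared only among the members of an indexer cluster.
pass4SymmKey = changeme-clustering
```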

1 Solution

cirkit1
Explorer

On the Production side, all Splunk node services were stopped.

The pass4SymmKey in the [general] stanza was updated in server.conf on each Production node individually, and each node's service was restarted.

Each server.conf file was then reviewed to confirm that an encrypted value appeared for pass4SymmKey.
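A quick way to do that review is to check whether the pass4SymmKey value now starts with Splunk's encrypted-value prefix ($1$ on 6.x, $7$ on newer releases) rather than the plaintext that was typed in. A minimal sketch; the helper function and file path are illustrative, not part of any Splunk tooling:

```shell
#!/bin/sh
# Hypothetical check: after restart, the pass4SymmKey value in server.conf
# should be an encrypted blob ($1$... on Splunk 6.x, $7$... on newer
# releases), not plaintext.

is_key_encrypted() {
    # $1: path to a server.conf file; succeeds if the key looks encrypted
    grep -E '^pass4SymmKey *= *\$[17]\$' "$1" >/dev/null
}

# Example against a throwaway file; the real path would be
# $SPLUNK_HOME/etc/system/local/server.conf on each node.
printf '[general]\npass4SymmKey = $1$n6g0N7SB\n' > /tmp/server.conf.example
if is_key_encrypted /tmp/server.conf.example; then
    echo "pass4SymmKey is encrypted"
fi
```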

The Production side was validated to ensure everything was working.

The same was done for the Test side.

At first, this did not fix the initial issue.

We then went to the Production license master and reviewed the Test pool, and noticed that the indexer names did not match those on the Test side. We edited the Test pool and removed all the indexers with GUID-like names. Also, when looking to add the Test indexers, we noticed that the three Test indexers did not appear in the list.

We then went to the bin directory of each indexer on the Test side and executed the following CLI command:
./splunk edit licenser-localslave -master_uri 'https://master:port'
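Since this has to run on every Test indexer, the per-indexer step can be sketched as a loop. The host names and management port below are placeholders; in practice you would run the splunk command on each box (for example over ssh) from $SPLUNK_HOME/bin:

```shell
#!/bin/sh
# Hypothetical sketch: build the licenser-localslave command for each
# Test indexer. Indexer names and the master URI are placeholders.

build_cmd() {
    # $1: license master URI (scheme://host:management-port)
    echo "./splunk edit licenser-localslave -master_uri '$1'"
}

MASTER_URI='https://prod-master.example.com:8089'   # assumed management port
for idx in test-idx01 test-idx02 test-idx03; do     # placeholder indexer names
    echo "on $idx run: $(build_cmd "$MASTER_URI")"
done
```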

We went back to the license master to add the Test indexers to the Test pool, and this time the indexer names appeared for addition.

Once the Test indexers were added to the Test pool, everything worked!

NOTE TO SELF: save the pass4SymmKey in a password vault for later retrieval. It might have saved us some time, but we learned a little more in the process.


mmacielinski_sp
Splunk Employee

I've seen this answer before. It works, but it is not necessary. Go to the host giving the error and either delete or comment out the pass4SymmKey in the [general] stanza. Restart the host and it will generate a new key. The most likely issue is that the key has become corrupted. I did this on one of our indexers in a cluster that was having this very issue, and it contacted the license master immediately without issue. The reason that setting the password in the clear and restarting works is that it re-hashes the pass4SymmKey; that is what fixes the issue, not the fact that the keys are the same.
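In other words, on the affected host the fix amounts to something like this (path and value are placeholders), followed by a restart:

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on the affected host
[general]
# Comment out (or delete) the possibly-corrupted key and restart Splunk;
# a fresh key is generated on startup.
#pass4SymmKey = $1$placeholder-encrypted-blob
```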

Now, the situation is different for search head clusters and indexer clusters: in those cases, the pass4SymmKey must be the same across all members of the cluster.

woodams
Explorer

This worked for me. Not sure why it works, but thanks!

I had an indexer that wouldn't connect. I had the pass4SymmKey saved, re-entered it, and rebooted, but the indexer still wouldn't connect to the license master. Just commenting it out and rebooting generated a new one that worked. *shrug*

This indexer started having issues after a botched master-app deploy with a rolling restart. I can't explain why only one indexer got this issue and not the others. Hope this helps someone.


