Splunk Enterprise

Splunkd service is not able to start.

phanichintha
Path Finder

Hello,

Can anyone help out? Looking at the splunkd logs:
11-02-2020 16:13:51.870 +1100 WARN  CMMasterProxy - Master is down! Make sure pass4SymmKey is matching if master is running.
11-02-2020 16:13:51.870 +1100 ERROR CMSlave - Waiting for master to come up... (retrying every second)

We have two indexers, one search head, and one master (which is also the deployment server). One of the indexers (indexer 1) is not restarting the splunkd service. Please treat this as a priority: how can we bring the splunkd service back up on indexer 1?


KSV
Loves-to-Learn

hi @phanichintha, is this issue resolved? If yes, could you post your solution? I am facing the same issue: https://community.splunk.com/t5/Deployment-Architecture/ERROR-CMSlave-Waiting-for-the-cluster-manage...


DanielPi
Moderator

Hi @KSV ,

I’m a Community Moderator in the Splunk Community.

This question was posted 4 years ago, so it might not get the attention you need for it to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post.

Thank you! 


isoutamo
SplunkTrust

Hi

You said that your cluster master is also a deployment server, or did I misunderstand? If it is, then that is not a Splunk-supported configuration! Never put the CM and DS on the same instance! If this is the situation, then I propose you create a support ticket with Splunk to figure out how you can solve this issue.

r. Ismo


phanichintha
Path Finder

Hello @isoutamo,

You are right. We have four AWS instances for Splunk: a search head, a deployment server (which is also configured as the CM), and two indexers.

In this case, what is causing the issue, and is it resolvable?

Apart from this, I tried changing the pass4SymmKey on the CM and the indexers, but after restarting, the key changed itself automatically. Currently, the pass4SymmKey on the CM is different from the one on the indexers. I think this is why the indexers are not restarting and coming up.

Please suggest a solution.


isoutamo
SplunkTrust

As I said, don't put the CM and DS on the same instance; it's not supported and will generate issues for you sooner or later!

If you have changed the pass4SymmKey, then that is the reason why those nodes cannot connect to each other. It must be the same under the [clustering] stanza on all nodes that are part of that cluster (including the search heads). When you add a plaintext pass4SymmKey to server.conf and then restart, the server encrypts it to something like $7$alsdklakdks..... So you must put that same plaintext pass4SymmKey on all indexers, the CM, and also the SH. After you have changed it and restarted, the cluster should work.
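A sketch of what that stanza looks like in server.conf (the key value here is an example placeholder, not a real secret):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf -- on the CM, both indexers,
# and the search head. The value below is an example placeholder.
[clustering]
pass4SymmKey = MySharedKey123
```

On the next restart, Splunk replaces the plaintext with an encrypted $7$... string; that is expected, as long as the same plaintext was entered on every node.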

I suppose this would be good reading for you: https://docs.splunk.com/Documentation/Splunk/8.1.0/Indexer/Aboutclusters

r. Ismo


phanichintha
Path Finder

No, I didn't change the key. As I observed, the CM has a different key, while the other three instances (SH, ID1 & ID2) have the same key.

So I think this is the issue.

If this is the issue, I don't know how the key got changed on the CM.


isoutamo
SplunkTrust
If you don't know what the original plaintext pass4SymmKey on the CM was, then create a new one and update it on all of those servers in .../etc/system/local/server.conf under the [clustering] stanza (and probably also under [general] if one of these servers is your LM), and then restart them, starting with the CM, then the peers, and then the SH.
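The update step can be sketched in a few lines. This is a hypothetical helper, not a Splunk tool: it writes one shared plaintext key into the [clustering] stanza of a node's server.conf, assuming the file parses as plain INI (Splunk .conf files are INI-style); the paths and key value are examples only.

```python
# Hypothetical helper (not a Splunk tool): write one shared plaintext
# pass4SymmKey into the [clustering] stanza of a server.conf file.
# Assumes the file parses as plain INI; the key value is an example only.
import configparser

def set_clustering_key(conf_path: str, plaintext_key: str) -> None:
    cp = configparser.ConfigParser()
    cp.optionxform = str  # preserve the camelCase of pass4SymmKey
    cp.read(conf_path)
    if not cp.has_section("clustering"):
        cp.add_section("clustering")
    cp.set("clustering", "pass4SymmKey", plaintext_key)
    with open(conf_path, "w") as f:
        cp.write(f)
```

You would run this against .../etc/system/local/server.conf on the CM, both indexers, and the SH, then restart each instance starting with the CM, then the peers, then the SH. On restart, Splunk re-encrypts the plaintext to a $7$... string, which is expected.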

phanichintha
Path Finder

Why is the splunkd service not running on the indexers? I didn't face this issue on the SH or DS. This is the first time I am facing it. How do I bring the service up, beyond what is in the older posts?


isoutamo
SplunkTrust
Do you have clients other than the SH that you are managing with this DS/CM? Could you easily remove the DS role from the CM?

phanichintha
Path Finder

Hello,

After rebooting, the indexer instance is still getting the same issue, so I checked the ERROR logs on the DS (CM):

11-03-2020 09:23:23.316 +1100 ERROR DigestProcessor - Failed signature match
11-03-2020 09:23:23.316 +1100 ERROR LMHttpUtil - Failed to verify HMAC signature, uri: /services/cluster/master/info/?output_mode=json
11-03-2020 09:23:28.319 +1100 ERROR DigestProcessor - Failed signature match
11-03-2020 09:23:28.319 +1100 ERROR LMHttpUtil - Failed to verify HMAC signature, uri: /services/cluster/master/info/?output_mode=json
11-03-2020 09:23:33.325 +1100 ERROR DigestProcessor - Failed signature match
11-03-2020 09:23:33.325 +1100 ERROR LMHttpUtil - Failed to verify HMAC signature, uri: /services/cluster/master/info/?output_mode=json
11-03-2020 09:23:38.330 +1100 ERROR DigestProcessor - Failed signature match
11-03-2020 09:23:38.330 +1100 ERROR LMHttpUtil - Failed to verify HMAC signature, uri: /services/cluster/master/info/?output_mode=json


isoutamo
SplunkTrust
Have you already put the same plaintext pass4SymmKey on all your nodes under the [clustering] stanza, and also checked that the one under [general] is the same as the one on your LM?
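If one of these servers is also the license master, both stanzas come into play; a sketch of the relevant server.conf fragment (all values are example placeholders):

```ini
# Example server.conf fragment -- values are placeholders.
# [general] must match the LM; [clustering] must match across CM, peers, SH.
[general]
pass4SymmKey = MySharedKey123

[clustering]
pass4SymmKey = MySharedKey123
```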
r. Ismo