Hello,
Can anyone help by looking at these splunkd logs?
11-02-2020 16:13:51.870 +1100 WARN CMMasterProxy - Master is down! Make sure pass4SymmKey is matching if master is running.
11-02-2020 16:13:51.870 +1100 ERROR CMSlave - Waiting for master to come up... (retrying every second)
We have two indexers, one search head, and one master (which is also the deployment server). Splunkd on indexer 1 is not restarting. Please treat this as a priority: how do I bring the splunkd service back up on indexer 1?
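As a first check on indexer 1, one can confirm the service state and watch the latest splunkd log lines (a minimal sketch, assuming a default $SPLUNK_HOME; adjust the path to your install):
# check whether splunkd is running on indexer 1
$SPLUNK_HOME/bin/splunk status
# try starting it and watch for the warnings shown above
$SPLUNK_HOME/bin/splunk start
tail -f $SPLUNK_HOME/var/log/splunk/splunkd.log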
Hi
You said that your cluster master is also a deployment server, or did I understand that wrongly? If so, that is not a Splunk-supported configuration. Never put the CM and DS on the same instance! If this is your situation, then I suggest you create a support ticket with Splunk to figure out how to resolve this issue.
r. Ismo
Hello Soutamo,
You are right. We have four AWS instances for Splunk: a search head, a deployment server (also configured as the CM), and two indexers.
So in this case, what is the issue, why is it occurring, and is it resolvable?
Apart from this, I tried changing the pass4SymmKey on the CM and the indexers, but after a restart the pass4SymmKey changed by itself. Currently, the pass4SymmKey on the CM is different from the one on the indexers. I think this is why the indexers are not restarting and coming up.
Please suggest a solution.
As I said, don't put the CM and DS on the same instance; it's not supported and will generate issues for you sooner or later!
If you have changed the pass4SymmKey, then that is the reason those nodes cannot connect to each other. It must be the same under the [clustering] stanza on all nodes that are part of the cluster (including the search heads). When you add a plain-text pass4SymmKey to server.conf and then restart, the server encrypts it to something like $7$alsdklakdks..... So you must put that same plain-text pass4SymmKey on all indexers, on the CM, and also on the SH. After you have changed it and restarted, the cluster should work.
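For illustration, a minimal server.conf sketch of that fix (the key value below is a placeholder, not your real key; set the identical plain-text value on the CM, both indexers, and the SH, then restart each node):
# $SPLUNK_HOME/etc/system/local/server.conf on every cluster node
[clustering]
pass4SymmKey = YourPlaceholderKey
# restart so Splunk re-encrypts the key to a $7$... value
$SPLUNK_HOME/bin/splunk restart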
I suppose this would be good reading for you: https://docs.splunk.com/Documentation/Splunk/8.1.0/Indexer/Aboutclusters
r. Ismo
No, I didn't change the key. As I observed, the CM has a different key and the other three instances have the same key (SH, ID1 & ID2).
So I think this is the issue.
If this is the issue, I don't know how it got changed on the CM.
Why are the splunkd services not running on the indexers? I didn't face this issue on the SH or the DS; this is the first time I am facing it. How do I bring the services up, beyond what the older posts suggest?
Hello,
After a reboot, the indexer instance was still getting the same issue, so I checked the ERROR logs on the DS (CM):
11-03-2020 09:23:23.316 +1100 ERROR DigestProcessor - Failed signature match
11-03-2020 09:23:23.316 +1100 ERROR LMHttpUtil - Failed to verify HMAC signature, uri: /services/cluster/master/info/?output_mode=json
11-03-2020 09:23:28.319 +1100 ERROR DigestProcessor - Failed signature match
11-03-2020 09:23:28.319 +1100 ERROR LMHttpUtil - Failed to verify HMAC signature, uri: /services/cluster/master/info/?output_mode=json
11-03-2020 09:23:33.325 +1100 ERROR DigestProcessor - Failed signature match
11-03-2020 09:23:33.325 +1100 ERROR LMHttpUtil - Failed to verify HMAC signature, uri: /services/cluster/master/info/?output_mode=json
11-03-2020 09:23:38.330 +1100 ERROR DigestProcessor - Failed signature match
11-03-2020 09:23:38.330 +1100 ERROR LMHttpUtil - Failed to verify HMAC signature, uri: /services/cluster/master/info/?output_mode=json
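These "Failed to verify HMAC signature" errors are consistent with the pass4SymmKey mismatch discussed above. One way to inspect the effective key on each node is btool; note that the encrypted $7$... values are derived from each node's own splunk.secret, so the ciphertext will differ between nodes even when the plain-text key matches. On Splunk versions that support it (7.2.2+), show-decrypted can recover the plain text for comparison (a sketch):
# show where the effective pass4SymmKey is set on this node
$SPLUNK_HOME/bin/splunk btool server list clustering --debug | grep pass4SymmKey
# decrypt a stored value to compare the plain text across nodes (paste this node's own $7$... value)
$SPLUNK_HOME/bin/splunk show-decrypted --value '$7$...'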