Deployment Architecture

Deployment configuration on indexers - DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected

Esky73
Builder

I am troubleshooting an inherited environment whose core includes: 3 x SH, 2 x PN, 1 x MN, 1 x DS

I am seeing the following error on both of my PNs:

INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected

Searching and looking at various posts, this error relates to communication between a deployment client and a deployment server. But these are indexers.
Looking at deploymentclient.conf, I have the following:
[target-broker:deploymentServer]
targetUri = masternode:8089
[deployment-client]
disabled = false

So my question is: why would my indexers have a deployment client config pointing at the master node?
Am I seeing the error because the MN isn't running as a DS?
How do I rectify this? Would this be the fix:

[deployment-client]
disabled = true ?

Thanks.

1 Solution

martin_mueller
SplunkTrust

If you're running an indexer cluster, the indexers / slaves / peer nodes should not contact any deployment server at all.
Instead, they receive their configuration from the cluster master (a button in the UI, or apply cluster-bundle on the CLI).

To rectify this, removing deploymentclient.conf from the indexers should do it.
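A minimal sketch of that removal, assuming a default /opt/splunk install path (adjust for your own $SPLUNK_HOME):

```shell
# On each peer node: move deploymentclient.conf out of the way rather than
# deleting it, so it can be restored if needed. The file usually lives in
# system/local, but it may also sit inside an app under etc/apps.
mv /opt/splunk/etc/system/local/deploymentclient.conf \
   /opt/splunk/etc/system/local/deploymentclient.conf.bak

# Confirm which file (if any) still supplies deployment-client settings:
/opt/splunk/bin/splunk btool deploymentclient list --debug
```

The peers then need a restart (ideally a rolling restart driven from the master) for the change to take effect.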


martin_mueller
SplunkTrust

Depending on how the app was built, it may be looking at previously indexed events that indicate the deployment client role. Usually you can look at what search is running behind that panel and find out for yourself.
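One way to check whether older events are still feeding such a panel is a search along these lines (a sketch; the panel's actual search may differ):

```spl
index=_internal sourcetype=splunkd component=DC:DeploymentClient
| stats latest(_time) AS last_seen BY host
```

If last_seen predates the config change, the panel is reflecting historical events rather than the current configuration.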


martin_mueller
SplunkTrust

Where are you reading that list from?


Esky73
Builder

It's showing in the 'Health Overview' app, which we use. The roles are specified as:

indexer
cluster_slave
deployment_client

Having looked at the 'Instances' section in the DMC, they are just showing the indexer role.


Esky73
Builder

Thanks Martin,

Have made the change. Initially I just renamed deploymentclient.conf, and I also renamed serverclass.conf, then carried out a rolling restart.

The restart has stopped the errors we were getting, so all good there. Just one final question: after the restart, the system still reports itself as configured as a deployment client. I thought this might get removed?

PEER1

indexer
cluster_slave
deployment_client

PEER2

indexer
cluster_slave
deployment_client


martin_mueller
SplunkTrust

Yeah, changing .conf files manually requires a restart; this is best done as a rolling restart through the master.
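For reference, the rolling restart can be driven from the master's CLI; a sketch, assuming a default /opt/splunk path:

```shell
# Run on the cluster master: restarts the peer nodes one at a time so
# indexing and searching stay available while the change is picked up.
/opt/splunk/bin/splunk rolling-restart cluster-peers
```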


Esky73
Builder

Thanks,

I assume the indexers will need restarting to pick up the removal of deploymentclient.conf? Can I use ./splunk reload?


Esky73
Builder

Yes, we have a master deployment server and two slaves as well that point to the master.


rajeev_ku
Path Finder

The deployment server should not be combined with the master node; it should run on a separate server.
