I am troubleshooting an inherited environment whose core includes: 3 x SH (search heads), 2 x PN (peer nodes/indexers), 1 x MN (master node), 1 x DS (deployment server).
I am seeing the following error on both of my PNs:
INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
From searching and reading various posts, this error relates to communication between a deployment client and a deployment server. But these are indexers.
Looking at deploymentclient.conf on them, I have the following:
[target-broker:deploymentServer]
targetUri = masternode:8089
[deployment-client]
disabled = false
So my question is: why would my indexers have a deployment client config pointing at the master node?
Am I seeing the error because the MN isn't running as a DS?
How do I rectify this? Should I just set the following?
[deployment-client]
disabled = true ?
Thanks.
If you're running an indexer cluster, the indexers / slaves / peer nodes should not contact any deployment server at all.
Instead, they receive their configuration from the cluster master (via the button in the UI, or apply cluster-bundle on the CLI).
To rectify, removing deploymentclient.conf from the indexers should do the trick.
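For reference, a rough sketch of the steps, assuming deploymentclient.conf lives in etc/system/local on the peers (it may be inside an app instead, so check with btool first) and that your bundle is already staged on the master:

# On each peer node (indexer): move the config out of the way
mv $SPLUNK_HOME/etc/system/local/deploymentclient.conf $SPLUNK_HOME/etc/system/local/deploymentclient.conf.bak

# On the cluster master: push the configuration bundle to the peers
$SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes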
Depending on how the app was built, it may be looking at previously indexed events that indicate the deployment client role. You can usually open the search behind that panel and find out for yourself.
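If you want to check without digging into the app, a search against the internal logs will show whether the peers are still acting as deployment clients. This is just a sketch, not necessarily the panel's actual search:

index=_internal sourcetype=splunkd component=DC:DeploymentClient
| stats latest(_time) as last_seen by host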
Where are you reading that list from?
It's showing in the 'Health Overview' app, which we use. The roles are specified as:
indexer
cluster_slave
deployment_client
Having looked at the 'Instances' section in the DMC, they are just showing the indexer role.
Thanks Martin,
I have made the change - initially I just renamed deploymentclient.conf, and I also renamed serverclass.conf, then carried out a rolling restart.
The restart has stopped the errors we were getting, so all good there. Just one final question: after the restart, the system still thinks the peers are configured as deployment clients - I thought this role might get removed?
PEER1
indexer
cluster_slave
deployment_client
PEER2
indexer
cluster_slave
deployment_client
Yeah, changing .conf files manually would require a restart - best done as a rolling restart through the master.
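If it helps, the sequence I'd expect is roughly: verify what is actually on disk with btool, then trigger the rolling restart from the master. A sketch, assuming default paths:

# On a peer: confirm no deploymentclient.conf settings remain in effect
$SPLUNK_HOME/bin/splunk btool deploymentclient list --debug

# On the cluster master: rolling-restart all peers
$SPLUNK_HOME/bin/splunk rolling-restart cluster-peers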
Thanks,
I assume the indexers will need restarting to pick up the removal of deploymentclient.conf? Can I use ./splunk reload?
Yes, we have a master deployment server, and two slaves as well that point to the master.
The deployment server should not run on the master node; deployment server duties for other servers should be handled by a separate instance.
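In that setup, the deploymentclient.conf on whatever still needs a deployment server (e.g. search heads or forwarders, not the clustered indexers) would point at that dedicated host rather than the master node - something like the following, where "deployserver.example.com" is just a placeholder hostname:

[target-broker:deploymentServer]
targetUri = deployserver.example.com:8089

[deployment-client]
disabled = false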