I have 8 indexer instances in an indexer cluster with repFactor = 5. When logged into the index cluster master node's DMC, the Overview Panel shows the number of indexers as 5 on 5 machines (left-most panel of Overview) while the right-most panel indicates there are 8 peers searchable and the Distributed Environment>Indexer Clustering panel lists the 8 deployed indexers. Also, when I execute the show cluster-status command on the master, it also lists the 8 indexers in the cluster.
Can anyone provide some clarity on what is behind these mismatches? Being a newbie to Splunk, I'm reasonably certain it's just my lack of comprehensive understanding of the inner mechanisms of splunkd rather than a deployment/operational problem. But if it is an operational problem, what are the likely configuration root causes, so I can remedy it?
In a separately posted question I have also provided a set of troubleshooting information for another problem I am having with this first deployment of Splunk: "too many streaming errors" resulting in hot bucket replication getting shut down. I could still use big-time help on that issue, but wanted to keep this question separate for community management purposes.
Is your DMC running on your cluster master, or a separate instance?
Additionally, please confirm the role on all 8 indexers is configured as a "Peer Node" role and not "Search Head node." On the cluster master, these roles are advertised based on the configuration of the instance that connects to the cluster master. So if they are reporting in as search heads, and not indexers, most likely you have the peer incorrectly configured.
Via the DMC, if you configure this on your cluster master, it should be aware of the nodes in your environment along with their standard roles. However, you will most likely need to configure the more detailed roles for these, such as license master, deployment server, etc.
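For reference, a correctly configured peer node's server.conf typically looks like the sketch below. The hostname, port, and secret are placeholders for your environment; the peer connects to the master with `mode = slave` and opens a replication port for bucket replication:

```
# $SPLUNK_HOME/etc/system/local/server.conf on each indexer (peer node)
# Hostname, ports, and pass4SymmKey below are placeholders.

[replication_port://9887]

[clustering]
mode = slave
master_uri = https://cluster-master.example.com:8089
pass4SymmKey = yourClusterSecret
```

If an instance instead has `mode = searchhead` (or no `[clustering]` stanza at all), it will not report in as a peer, which can produce exactly this kind of count mismatch in the DMC.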
We have configured the DMC on a separate instance. When we check the total number of indexes in the DMC, it shows a different result compared to the search head (the index count on the search head is greater than in the DMC).
DMC is running on the cluster master.
All 8 indexers are configured with mode = slave in their respective server.conf files (with the source for that config item in the [clustering] stanza in each case being /opt/splunk/etc/system/local/server.conf). Looking at the Distributed Environment > Indexer Clustering config in each cluster peer's Splunk Web also indicates they are peer nodes. And in the cluster master's DMC > Setup panel, each of the 8 remote instances is flagged as Indexer.
However, I just discovered a problem with these remote instances, the cause of which I cannot determine: all of the remote instances have their Instance (host) value set to the cluster master's value. Each of these remote instances does have a correct and unique value for its respective Instance (serverName) and machine name. I'm not at all sure how the Instance (host) value got set to the cluster master's value. I can of course easily change this in the Settings > Server Settings > Index Settings panel. Should I do this? (I would think so, but want to confirm, especially since I did not make these settings myself; they must have been auto-generated.)
I just tried changing Settings > Server Settings > Index Settings > Default Host Name from the cluster master's hostname to the instance's own hostname, but after saving and restarting Splunk from Splunk Web, I find that it toggles back to the cluster master's hostname.
Try using the Web interface and confirm the "Peer Node" role. Also provide a copy of your server.conf from one of the indexers.
OK, I may have discovered the cause of this issue. It looks like I had an inputs.conf file in the cluster master's bundle. I corrected this, and the issue is now resolved.
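For anyone who hits the same symptom: anything placed under the master's configuration bundle is distributed to every peer, where it takes precedence over the peer's own local settings. So an inputs.conf like the sketch below (path and hostname assumed for illustration) would push the master's host value to all 8 peers and cause the setting to revert after every restart:

```
# On the cluster master -- everything under master-apps is pushed to all peers:
# $SPLUNK_HOME/etc/master-apps/_cluster/local/inputs.conf

[default]
host = cluster-master-hostname    # overrides each peer's own default host value
```

Removing the stray host setting (or the whole file) from the bundle and redistributing it lets each peer fall back to its own locally configured host value.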
Thanks for kicking this around and working with me, esix_splunk!