Hi
We are getting the following error message while accessing any application.
Unable to distribute to peer named
Regards
Rajesh
I installed one search head and three indexers, once on version 6.3.10 and once on version 6.4.4.
I configured only distributed search.
After configuring distributed search, I restarted the service in the same way on both.
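For context, a search peer is normally added either from the UI (Settings -> Distributed search -> Search peers) or via distsearch.conf on the search head. A minimal sketch of the conf approach, using only the peer address that appears in the error below (the other two indexers would be listed the same way, and 8090 is assumed to be the peers' management port):

[distributedSearch]
# Comma-separated list of search peers (indexers) this search head distributes to.
# 192.168.88.131:8090 is taken from the error message; the remaining peers go here too.
servers = https://192.168.88.131:8090

The same can be done from the CLI with something like splunk add search-server https://192.168.88.131:8090, supplying remote admin credentials when prompted.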
"ERROR DistributedPeer - Peer: https: //192.168.88.131: 8090 Duplicate Servername"
"ERROR DistributedPeer - Peer: https: //192.168.88.131: 8090 Duplicate Servername" This message occurred only in 6.4.4.
The search query used is: index=_internal source=splunkd.log log_level=ERROR component=DistributedPeer
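To see which peers the search head has configured (and spot a name clash), a REST search along these lines can also be run from the search head; this is a sketch and the exact field names may vary between versions:

| rest /services/search/distributed/peers
| table title peerName status version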
**6.3.10 Search Head's Distributed Search Detail**
**6.3.10 Search Head's server.conf**
[sslConfig]
sslKeysfilePassword = $1$bY3vFG44n7ZX
[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
quota = MAX
slaves = *
stack_id = download-trial
[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder
[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free
[general]
pass4SymmKey = $1$OsG7SCt+1ORX
serverName = server_1
**6.4.4 Search Head's Distributed Search Detail**
Below is the distributed search screen after the Splunk restart.
**6.4.4 Search Head's server.conf**
[general]
serverName = server_1
pass4SymmKey = $1$fYMl0+0F6W+P
[sslConfig]
sslKeysfilePassword = $1$Ks9xj6hDoj2P
[kvstore]
port = 8192
[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
quota = MAX
slaves = *
stack_id = download-trial
[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder
[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free
It means that you have a search peer (configured in Manager -> Distributed Search) which has the same serverName as the search head you're on. Pretty self-explanatory 🙂
The usual culprit is running two Splunk instances on the same box for dev or testing; in that case Splunk uses the server's host name as its serverName by default. You can override this in server.conf in $SPLUNK_HOME/etc/system/local/ by setting a value called serverName, as per the docs:
[general]
serverName = <ascii string>
* The name used to identify this Splunk instance for features such as distributed search.
* Defaults to <hostname>-<user running splunk>.
* May not be an empty string
* May contain environment variables
* After any environment variables have been expanded, the server name (if not an IPv6
address) can only contain letters, numbers, underscores, dots, and dashes; and
it must start with a letter, number, or an underscore.
E.g., in server.conf in $SPLUNK_HOME/etc/system/local/:
[general]
serverName = bobServ
Of course, this is just the usual suspect; it could be another instance on another box that happens to have the same serverName, perhaps because server.conf has been shared and both servers ended up with the same name, or something else altogether 🙂
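If it's easier, the name can also be checked and changed from the CLI on the offending instance; a quick sketch (the new name "indexer_2" is just an example placeholder, pick any unique value, and the command may prompt for admin credentials):

./splunk show servername
./splunk set servername indexer_2
./splunk restart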
Hi
We have stopped the distributed search option, and now it no longer shows the "Duplicate Servername" message.
Regards
Rajesh