
After upgrading the cluster from 5.0.9 to 6.1.3, socket errors appear when connecting to a remote peer.

lrudolph
Path Finder

Hi,

we upgraded our Splunk 5.0.9 test cluster to 6.1.3 today (two indexers, one cluster master, one search head) and followed all instructions from docs.splunk.com. All Splunk instances run on separate servers with Windows Server 2008 R2 SP1 as the OS.

After the upgrade, the indexers and the cluster master have problems connecting to the search head on port 8089. Example from an indexer (x.x.x.x is the IP of the search head):

08-07-2014 17:15:22.058 +0000 INFO  NetUtils - Error in connection() 10060
08-07-2014 17:15:22.058 +0000 WARN  HTTPClient - SocketError connecting to=x.x.x.x:8089
08-07-2014 17:15:22.058 +0000 WARN  HTTPClient - Connect to=x.x.x.x:8089 timed out; exceeded 30sec
08-07-2014 17:15:22.058 +0000 ERROR LMTracker - failed to send rows, reason='Unable to connect to remote peer: https://x.x.x.x:8089 rc=2'

The cluster is in sync and searching via the search head works. I can confirm that these errors were not present before the upgrade and only appeared afterwards. Port 8089 is open in the network and allowed in the local firewall rules. Completely rebooting the servers didn't help either.
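To rule out the network layer when you see `SocketError connecting to=host:8089`, it can help to test raw TCP reachability of the management port from an indexer, independently of Splunk. A minimal Python sketch (the host and port are placeholders; the demo below connects to a local listener instead of a real search head):

```python
import socket

def can_connect(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener standing in for the search head's
# management port (x.x.x.x:8089 in the log excerpt above).
listener = socket.socket()
listener.bind(("127.0.0.1", 0))           # let the OS pick a free port
listener.listen(1)
open_port = listener.getsockname()[1]

print(can_connect("127.0.0.1", open_port))              # True: port is listening
listener.close()
print(can_connect("127.0.0.1", open_port, timeout=1.0)) # False: nothing listening
```

If `can_connect("x.x.x.x", 8089)` fails from the indexer while the search head is up, the problem is connectivity or the search head simply isn't listening on 8089 (which turned out to be the case here).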

Did something change in 6.1.3 that causes these errors? What can I do to solve them?

1 Solution

lrudolph
Path Finder

I found the error. The Splunk management port on the search head was set to 8090 instead of 8089. After changing the value and restarting Splunk, the indexers could connect to it. It's really strange - I don't remember ever changing this value; maybe it was done automatically during installation...
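For anyone hitting the same thing: the management port is controlled by the `mgmtHostPort` setting in `web.conf` on the affected instance. A minimal sketch of the fix; the exact file location shown (`etc/system/local`) is the usual place for local overrides, but check where the value is actually set on your system:

```ini
# $SPLUNK_HOME/etc/system/local/web.conf on the search head
[settings]
# splunkd management port that the indexers and cluster master connect to.
# It was 8090 here; setting it back to the default 8089 fixed the errors.
mgmtHostPort = 127.0.0.1:8089
```

Restart Splunk on the search head after changing the value so splunkd rebinds to the new port.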


gabetheISguy
Explorer

Thanks lrudolph!!

Had the same issue. Fixed it 🙂
