Hi,
we upgraded our Splunk 5.0.9 test cluster to 6.1.3 today (two indexers, one cluster master, one search head) and followed all instructions from docs.splunk.com. All Splunk instances run on separate servers with Windows Server 2008 R2 SP1 as the OS.
After the upgrade, the indexers and the cluster master have problems connecting to the search head on port 8089. Example from an indexer (x.x.x.x is the IP of the search head):
08-07-2014 17:15:22.058 +0000 INFO NetUtils - Error in connection() 10060
08-07-2014 17:15:22.058 +0000 WARN HTTPClient - SocketError connecting to=x.x.x.x:8089
08-07-2014 17:15:22.058 +0000 WARN HTTPClient - Connect to=x.x.x.x:8089 timed out; exceeded 30sec
08-07-2014 17:15:22.058 +0000 ERROR LMTracker - failed to send rows, reason='Unable to connect to remote peer: https://x.x.x.x:8089 rc=2'
The cluster is in sync and searching via the search head still works. I can confirm that these errors were not present before the upgrade and only appeared afterwards. Port 8089 is open in the network and allowed in the local firewall rules. Completely rebooting the servers didn't help either.
Did something change in 6.1.3 that would cause these errors? What can I do to resolve them?
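For reference, basic reachability was verified with something along these lines (a rough sketch, not an exhaustive check; x.x.x.x again stands for the search head IP, and on W2K8 R2 the Telnet client feature has to be added first):

    telnet x.x.x.x 8089                 (run on an indexer: tests whether a TCP connection to the management port is possible)
    netstat -ano | findstr ":8089"      (run on the search head: shows whether anything is actually listening on 8089)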
I found the error. The Splunk management port on the search head was set to 8090 instead of 8089. After changing the value and restarting Splunk, the indexers could connect to it again. It's really strange; I don't remember ever changing this value, so maybe it was set automatically during installation...
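In case anyone else runs into this: as far as I can tell, the management port can be checked and changed with the Splunk CLI (the path below assumes a default Windows install):

    cd "C:\Program Files\Splunk\bin"
    splunk show splunkd-port            (prints the currently configured management port)
    splunk set splunkd-port 8089        (sets the management port back to the default 8089)
    splunk restart

I believe the value also shows up in web.conf (%SPLUNK_HOME%\etc\system\local\web.conf) under the [settings] stanza, where the default looks like this:

    [settings]
    mgmtHostPort = 127.0.0.1:8089

Either way, splunkd needs a restart before the new port takes effect, which matches what I saw: the indexers only reconnected after the restart.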
Thanks lrudolph!!
Had the same issue. Fixed it 🙂