I have a Splunk 7.1.2 Search Head Cluster behind an AWS Load Balancer, and it works fine. Its server.conf says:
httpport = 443
enableSplunkWebSSL = true
privKeyPath = /path/to/mycert.key
caCertPath = /path/to/mycert.pem
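(In case it's relevant, here's a quick way to sanity-check that the private key and certificate match; the paths below are the same placeholders as above.)

# The two digests should be identical if the key matches the certificate
openssl x509 -noout -modulus -in /path/to/mycert.pem | openssl md5
openssl rsa -noout -modulus -in /path/to/mycert.key | openssl md5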
Now I'm deploying a brand-new cluster on version 7.2.3 with the same server.conf, but the load balancer doesn't mark the instances as healthy. The health check is a GET on https://splunkhostIP/en-US/account/login?return_to=%2Fen-US%2F, and for every check splunkd.log shows these two messages:
01-30-2019 21:27:18.107 +0000 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='handshake failure'.
01-30-2019 21:27:18.107 +0000 WARN HttpListener - Socket error from 172.16.77.204:3955 while idling: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher
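Since "no shared cipher" suggests a cipher suite mismatch, the handshake can be tested directly with openssl (a diagnostic sketch; splunkhostIP is the placeholder from the health check URL above):

# Show which protocol and cipher the server actually negotiates
openssl s_client -connect splunkhostIP:443 < /dev/null
# Force an older protocol to see whether 7.2.3 now rejects it
openssl s_client -connect splunkhostIP:443 -tls1 < /dev/null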
The IP in the message is the load balancer's internal IP performing the health check.
The old search head cluster instances don't log these warning messages, and the old cluster's setup is identical except for the Splunk version.
The certificate file is the same on both clusters, and they behave exactly alike in the browser: accessing them by hostname works fine, since the certificate is DigiCert-signed, and accessing them by IP produces a certificate warning on both, but after I accept the "unsafe" connection they behave the same.
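To compare what each version actually applies at runtime, the effective SSL settings can be dumped on an old and a new instance (a sketch assuming $SPLUNK_HOME points at the install directory):

# Effective Splunk Web SSL settings (cipherSuite, sslVersions, cert paths)
$SPLUNK_HOME/bin/splunk btool web list settings --debug | grep -Ei 'ssl|cipher'
# Effective splunkd SSL settings
$SPLUNK_HOME/bin/splunk btool server list sslConfig --debug | grep -Ei 'ssl|cipher'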
I've seen other questions with the same warning messages, but those issues don't match mine.
I really appreciate any help.