I recently started using the HEC with TLS on my standalone testing instance and now I am seeing some behavior that I cannot make sense of.
I assume it is related to the fact that I configured the TCP input and the HEC input to use different certificates.
The HEC Input is working fine, but when a UF tries to connect to the TCP Input, I get this error:
05-22-2025 07:39:18.469 +0000 ERROR TcpInputProc [2339416 FwdDataReceiverThread] - Error encountered for connection from src=REDACTED:31261. error:14089086:SSL routines:ssl3_get_client_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.
05-22-2025 07:39:18.555 +0000 ERROR X509Verify [2339416 FwdDataReceiverThread] - Client X509 certificate (CN=REDACTED,CN=A,OU=B,DC=C,DC=D,DC=E) failed validation; error=19, reason="self signed certificate in certificate chain"
05-22-2025 07:39:18.555 +0000 WARN SSLCommon [2339416 FwdDataReceiverThread] - Received fatal SSL3 alert. ssl_state='error', alert_description='unknown CA'.
05-22-2025 07:39:18.555 +0000 ERROR TcpInputProc [2339416 FwdDataReceiverThread] - Error encountered for connection from src=10.253.192.20:32991. error:14089086:SSL routines:ssl3_get_client_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.
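(As far as I can tell from the OpenSSL docs, error=19 corresponds to X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN, i.e. a self-signed certificate somewhere in the chain presented by the client.)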
On the UF, I can see the following error message:
05-22-2025 07:39:17.953 +0000 WARN SSLCommon [1074 TcpOutEloop] - Received fatal SSL3 alert. ssl_state='SSLv3 read server session ticket A', alert_description='unknown CA'.
05-22-2025 07:39:17.953 +0000 ERROR TcpOutputFd [1074 TcpOutEloop] - Connection to host=REDACTED:9997 failed
Below are my config files. I appreciate any pointers as to what I did wrong.
Note: All files storing certificates follow the "usual" order.
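By "usual" order I mean the standard Splunk layout for a combined PEM (a sketch; the file names here are placeholders, not my actual files):

# server certificate first, then its private key, then the CA chain
cat myServerCert.pem myServerKey.pem myCACert.pem > cert0.pem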
Standalone/Indexer:
server.conf
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/cert.pem
inputs.conf
[splunktcp-ssl:9997]
disabled=0
[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/cert0.pem
sslPassword = REDACTED
requireClientCert = true
sslVersions = tls1.2
[http]
disabled = 0
enableSSL = 1
serverCert = /opt/splunk/etc/auth/mycerts/cert1.pem
sslPassword = REDACTED
[http://whatthehec]
disabled = 0
token = REDACTED
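For reference, the check that the error message itself suggests would look roughly like this on the indexer (a sketch using the paths from my config; if intermediates are involved they may need to be passed via -untrusted):

openssl verify -CAfile /opt/splunk/etc/auth/mycerts/cert.pem /opt/splunk/etc/auth/mycerts/cert0.pem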
UF:
server.conf
[sslConfig]
serverCert = /mnt/certs/cert0.pem
sslPassword = REDACTED
sslRootCAPath = /mnt/certs/cert.pem
sslVersions = tls1.2
outputs.conf:
[tcpout]
defaultGroup = def
forwardedindex.2.whitelist = (_audit|_introspection|_internal)
[tcpout:def]
useACK = true
server = server:9997
autoLBFrequency = 180
forceTimebasedAutoLB = false
autoLBVolume = 5000000
maxQueueSize = 100MB
connectionTTL = 300
heartbeatFrequency = 350
writeTimeout = 300
sslVersions = tls1.2
clientCert = /mnt/certs/cert0.pem
sslRootCAPath = /mnt/certs/cert.pem
sslPassword = REDACTED
sslVerifyServerCert = true
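Since the indexer and the UF are supposed to trust the same root CA, one quick cross-check (a sketch, each command run on the respective machine) is to compare the CA subjects and fingerprints on both sides; they should be identical:

# on the UF
openssl x509 -in /mnt/certs/cert.pem -noout -subject -fingerprint -sha256
# on the indexer
openssl x509 -in /opt/splunk/etc/auth/mycerts/cert.pem -noout -subject -fingerprint -sha256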
Thank you for the replies, both of which have been very helpful in resolving this issue.
Cleaning up the sslRootCAPath settings on the UF was a worthwhile fix in itself.
Investigating the TLS negotiation ultimately led me to realize that, on the indexer, etc/system/local/server.conf did not exist.
In the Splunk 9.2.5 Docker image, the default.yml file apparently did not get processed by Ansible. The other config files it should generate (web.conf, authorize.conf) were also missing.
The fact that there was no root CA certificate configured on the indexer explains why the log message says "unknown CA".
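For anyone running into the same thing, this is roughly how I confirmed it inside the container (a sketch; btool prints the effective configuration and the file each setting comes from):

ls -l /opt/splunk/etc/system/local/
/opt/splunk/bin/splunk btool server list sslConfig --debug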
Hi @zapping575
Nothing in particular is jumping out at me. In your outputs.conf on the UF, sslRootCAPath is deprecated; the preferred location is server.conf/[sslConfig]/sslRootCAPath, but you have set that too.
I would try the following:
Check the certs can be read okay:
openssl x509 -in /path/to/certificate.crt -text -noout
Check which CAs the indexer will accept client certificates from:
openssl s_client -connect <indexer_host>:9997 -showcerts -tls1_2
Accepted CAs are displayed under the "Acceptable client certificate CA names" heading.
Try to connect to your indexer from the UF using the certs with openssl:
openssl s_client \
-connect <indexer_host>:9997 \
-CAfile /mnt/certs/cert.pem \
-cert /mnt/certs/cert0.pem \
-key /mnt/certs/cert0.pem \
-tls1_2 \
-state -showcerts
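If the mutual TLS handshake works, the output should end with "Verify return code: 0 (ok)"; if you instead see the same "unknown CA" alert, you have reproduced the Splunk error outside of Splunk and can debug the certificate chain directly.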