
ERROR TcpOutputFd - Connection to host=c:9997 failed. sock_error = 104. SSL Error = error:00000000:lib(0):func(0):reason(0)

sylim_splunk
Splunk Employee

We have configured an intermediary heavy forwarder (HF) C and two HFs, A and B, that connect to C.
HF A is able to establish a connection and send data to HF C over SSL, but HF B is not.
We need your help fixing this.

Forwarder A can communicate with intermediary forwarder C over SSL.
Forwarder B can NOT communicate with intermediary forwarder C over SSL.
A and C are in the same chassis/compartment; B is not.
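
For reference, a rough sketch of the SSL forwarding settings on A/B and the receiving settings on C (the group name, cert paths and password below are placeholders rather than our actual values, and attribute names can vary by Splunk version):

  # outputs.conf on forwarders A and B (sketch, placeholder values)
  [tcpout:ssl_group]
  server = 1.2.3.4:9997
  useACK = true
  sslCertPath = $SPLUNK_HOME/etc/auth/mycerts/client.pem
  sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/cacert.pem
  sslPassword = <cert password>
  sslVerifyServerCert = false

  # inputs.conf on intermediary HF C (sketch, placeholder values)
  [splunktcp-ssl:9997]
  disabled = 0

  [SSL]
  serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
  sslPassword = <cert password>
  requireClientCert = false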

  • Log messages

  • A has no problem:
    02-19-2020 23:38:26.344 +0000 INFO TcpOutputProc - Connected to idx=1.2.3.4:9997, pset=0, reuse=0. using ACK.

  • B gets the error below:
    02-19-2020 23:41:09.861 +0000 ERROR TcpOutputFd - Connection to host=1.2.3.4:9997 failed. sock_error = 104. SSL Error = error:00000000:lib(0):func(0):reason(0)

1 Solution

sylim_splunk
Splunk Employee

It turned out to be a firewall issue. To get to the point, here's what I did:
i) Configuration: check whether any configuration is missing -> all good according to the docs.
ii) Certs: use the same certs and configuration from A on B, to rule out mistakes made while creating the certs or configuring forwarder B -> still gets the error.
iii) Certs: use the default certs and see if the behavior changes -> still the same issue. This tells us it is probably caused by something outside Splunk.
iv) splunk cmd openssl s_client -connect C:9997 -cert ... from both A and B to see what happens (a fuller command sketch follows this list).
   HF A - OK.
   HF B - still fails.
v) A network entity sitting between B and C could be breaking the sessions: capture TCP packets to get hints about sock_error = 104 (errno 104 is ECONNRESET, i.e. the connection was reset by the peer) and to understand why the other end apparently sends a TCP reset.
   Capture on both ends, since a network entity in between is the suspect.

 v-1) On B:  tcpdump -i any -ta -s0 host C and port 9997 -w output.pcap
 v-2) On C:  tcpdump -i any -ta -s0 host B and port 9997 -w output.pcap
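
A sketch of the step iv) check, run from A and then from B; the cert/key paths are placeholders for whatever outputs.conf on that forwarder points at:

  # C:9997 is the intermediary's receiving port; paths are placeholders.
  splunk cmd openssl s_client -connect C:9997 \
      -cert $SPLUNK_HOME/etc/auth/mycerts/client.pem \
      -key $SPLUNK_HOME/etc/auth/mycerts/client.pem

  # A healthy run prints the server certificate chain and ends with a
  # "Verify return code"; from B the session was reset before getting there.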

  • Findings from step v):
    v-1) The pcap from HF B shows the intermediary HF C disconnecting in the middle of the SSL handshake: C sends B a RST, which is expected since that is what the splunkd log message says.
    v-2) The pcap from HF C says differently: HF B sends a RST to HF C.

Judging by the v-1) and v-2) outcomes, there must be a network entity breaking the session by sending RSTs to both ends (a minimal filter for pulling those resets out of the captures is sketched below).
This has been fixed with the help of the network team by correcting a firewall configuration.
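
A quick way to confirm where the resets show up in the step v) captures (a sketch; the file name matches the tcpdump commands above):

  # Read each capture back and print only packets with the RST flag set,
  # to see which direction the resets appear to come from on each host.
  tcpdump -nn -r output.pcap 'tcp[tcpflags] & tcp-rst != 0'

If a middlebox is injecting the resets, each side will see them arriving from the other end, as in the v-1)/v-2) findings above.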

