Deployment Architecture

eventType=connect_fail and TcpOutputProc timed out

alexraber
New Member

Hello,

I recently inherited an environment running Splunk and am trying to work through some issues -- I'm seeing the following errors looping in metrics.log on a handful of my Windows machines:

1-11-2018 16:23:33.860 -0800 INFO  StatusMgr - destHost=splunkurl, destIp=splunkip, destPort=9997, eventType=connect_fail, publisher=tcpout, sourcePort=8089, statusee=TcpOutputProcessor
1-11-2018 16:23:43.860 -0800 INFO  StatusMgr - destHost=splunkurl, destIp=splunkip, destPort=9997, eventType=connect_try, publisher=tcpout, sourcePort=8089, statusee=TcpOutputProcessor

And the following warnings looping in splunkd.log:

1-11-2018 16:24:03.859 -0800 WARN  TcpOutputProc - Cooked connection to ip=splunkip:9997 timed out
1-11-2018 16:24:33.860 -0800 WARN  TcpOutputProc - Cooked connection to ip=splunkip:9997 timed out
1-11-2018 16:25:03.860 -0800 WARN  TcpOutputProc - Cooked connection to ip=splunkip:9997 timed out
1-11-2018 16:25:33.860 -0800 WARN  TcpOutputProc - Cooked connection to ip=splunkip:9997 timed out
1-11-2018 16:25:39.782 -0800 WARN  TcpOutputProc - Forwarding to indexer group default-autolb-group blocked for 1834600 seconds.

From what I've gathered in my searches, this can be caused by my indexer queues being blocked, possibly due to too much data arriving too quickly, slow disks, busy CPUs, or incorrect configuration.
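One way to check the blocked-queue theory is to filter the indexer's own metrics.log for `blocked=true`. A minimal sketch below uses hypothetical sample log lines in place of the real file; on a live indexer you would grep `$SPLUNK_HOME/var/log/splunk/metrics.log` directly:

```shell
# Hypothetical metrics.log sample; on a real indexer, replace the printf with:
#   grep 'blocked=true' $SPLUNK_HOME/var/log/splunk/metrics.log | tail -20
printf '%s\n' \
  '01-11-2018 16:23:33.860 -0800 INFO  Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=500' \
  '01-11-2018 16:23:33.860 -0800 INFO  Metrics - group=queue, name=parsingqueue, max_size_kb=500, current_size_kb=12' \
  | grep 'blocked=true'
```

A queue name that shows up blocked repeatedly (e.g. indexqueue) would point at the indexer side rather than the forwarder.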

I verified the configs against another set of machines that are not experiencing this issue, and I'm able to use Splunk to search the logs from those machines.

In reviewing a sample of two similarly functioning nodes in a cluster, where Splunk is working on one and not on the other, I find that they have different SSL key passwords and pass4SymmKey values in outputs.conf and server.conf. I also noticed that on the working node the certs were last modified in 2017, whereas the certs on the non-working node were last touched in 2016.

I tested by copying the certs and keys over from the working node and setting the credentials to match the working node -- after restarting Splunk, I continue to see the same log output.
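For reference when comparing the two nodes, the SSL settings in question live in the forwarder's outputs.conf [tcpout] stanza. A hedged sketch with placeholder stanza names and paths (exact setting names can vary by Splunk version, so check your version's outputs.conf spec):

```ini
# outputs.conf on the forwarder -- placeholder values, not a working config
[tcpout:primary_indexers]
server = splunkip:9997
sslCertPath = $SPLUNK_HOME/etc/auth/client.pem
sslPassword = <key password; Splunk hashes this in place on restart>
sslVerifyServerCert = false
```

Because sslPassword is rewritten as a hash on restart, copying a hashed value between nodes will not work; the cleartext password has to be re-entered on the node it is copied to.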

Would anybody have further insight into this issue, or point me in the right direction for where to look in the documentation to resolve it?

Best and thanks


mayurr98
Super Champion

alexraber
New Member

Hello mayurr98, I've confirmed the forwarder is configured with 2 stack_ids in server.conf, and that outputs.conf is configured to send to the receiver.

The receiver is configured to accept from both stack_ids, and other nodes in the same stack_ids are showing up in Splunk.

This setup is using the Universal Forwarder.
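To make the routing side concrete, an outputs.conf with two target groups might look like the sketch below; the group names and servers are placeholders, not values from this environment:

```ini
# outputs.conf on the forwarder -- hypothetical two-group layout
[tcpout]
defaultGroup = group_a, group_b

[tcpout:group_a]
server = indexer-a.example.com:9997

[tcpout:group_b]
server = indexer-b.example.com:9997
```

If only one node in a group fails to connect while its peers succeed, network path or certificate differences on that one node are the usual suspects.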


mayurr98
Super Champion

This setting needs to be done on the indexer, since the data is received by the indexer. On the forwarder, you must have run the command ./splunk add forward-server <indexer-ip>:9997, which tells the forwarder to send data to the indexer over port 9997.
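As a sketch of the receiving side mayurr98 describes, the indexer listens on TCP 9997 via an inputs.conf stanza (values here are the defaults, shown for illustration):

```ini
# inputs.conf on the indexer -- enables receiving cooked data on TCP 9997
[splunktcp://9997]
disabled = 0
```

On the forwarder side, `./splunk add forward-server <indexer-ip>:9997` writes the corresponding [tcpout] entry into outputs.conf.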

I hope this helps you!
