
The TCP output processor has paused the data flow.

willsy
Communicator

Hello,

I have the following error on my cluster master's (XXXXA13) web GUI.

Search peer XXXXP13 has the following message: The TCP output processor has paused the data flow. Forwarding to host_dest= inside output group group1 from host_src=XXXXP13 has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data

On the deployment server XXXXP13 I have the following error message in the web GUI:

The TCP output processor has paused the data flow. Forwarding to host_dest= inside output group group1 from host_src=XXXXXP13 has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data

If I then have a look at splunkd.log on the deployment server, I see the following errors:

IndexerDiscoveryHeartbeatThread - Error in Indexer Discovery communication. Verify that the pass4SymmKey set under [indexer_discovery:group1] in 'outputs.conf' matches the same setting under [indexer_discovery] in 'server.conf' on the Cluster Master. [uri=https://XXXXXA13:8089/services/indexer_discovery http_code=502 http_response="Unauthorized"]
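
For context, my understanding of the two stanzas that message refers to is roughly the following; the secret is of course whatever value was chosen, and the hostnames are just the ones from the messages above:

# outputs.conf on the forwarder (XXXXXP13)
[indexer_discovery:group1]
pass4SymmKey = <shared secret>
master_uri = https://XXXXXA13:8089

[tcpout:group1]
indexerDiscovery = group1

# server.conf on the cluster master (XXXXXA13)
[indexer_discovery]
pass4SymmKey = <same shared secret>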

WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to host_dest= inside output group group1 from host_src=XXXXXP13 has been blocked for blocked_seconds=158470. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

Any help is greatly appreciated. This only started happening after upgrading to 8.4.1.



1 Solution

willsy
Communicator

So this actually turned out to be a really simple solution to a problem I believed would be a lot harder.

I checked all of my network firewalls, GPO firewalls, and host-based firewalls, and it turned out the data diode was not actually accepting anything on that particular port.

I would also point out that if you are trying to send data from Splunk to a third party, I would HIGHLY advise going down the heavy forwarder route: much cleaner, simpler, and far less hassle.
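
If you want a quick way to check for the same thing, something along these lines should do it (9997 is only an example, use whichever port your output group actually sends to):

# From the forwarder, test whether the receiver is actually accepting connections on the forwarding port
nc -vz <receiver-host> 9997

# And double-check which servers/ports the forwarder thinks it should be sending to
$SPLUNK_HOME/bin/splunk btool outputs list --debug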


anil19
Engager

Hi @willsy 

Greetings, I'm new to Splunk.
We are also seeing a similar error, "Error in Indexer Discovery communication", on a few HFs in a clustered environment, but other HFs in the same cluster are behaving normally.
Recently, we updated/renewed the SSL certificates on the forwarding, indexing, and search tiers.
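
In case it helps narrow things down, one check we are planning from an affected HF is whether the cluster manager's management port still answers cleanly with the renewed certificate, for example:

# Replace the host with your cluster manager; 8089 is the default management port
openssl s_client -connect <cluster-manager>:8089 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates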

 


richgalloway
SplunkTrust

@anil19 This thread is over a year old with an accepted solution. Please post a new question describing your problem. You can refer to this thread and explain how the accepted solution doesn't help in your case.

---
If this reply helps you, Karma would be appreciated.

richgalloway
SplunkTrust
Have you reviewed the system health in the Monitoring Console?
Have you verified the pass4SymmKey values are correct?
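
One practical wrinkle when verifying: pass4SymmKey is normally stored encrypted on disk after a restart, so rather than comparing files you typically re-enter the same plaintext value on both sides and restart. To see which file each side is actually reading the setting from (stanza names assume the group1 example above):

# On the forwarder
$SPLUNK_HOME/bin/splunk btool outputs list indexer_discovery:group1 --debug

# On the cluster master
$SPLUNK_HOME/bin/splunk btool server list indexer_discovery --debug
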
---
If this reply helps you, Karma would be appreciated.

Simons20
Loves-to-Learn Lots

What are the correct settings for the pass4SymmKey values?
