Splunk Enterprise

Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked

shugup2923
Path Finder

Hi,

I have been getting this warning event on one of my Splunk instances (role: Deployment Server + License Master).

Architecture is as below:

Deployment Server > HF1 and HF2 > Indexers

Error:

Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for X seconds. This will probably stall the data flow towards indexing and other outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.


Can you please help with how this can be resolved?
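One quick first check on the instance that logs this warning is to see how often the Tcpout processor reports blocking, from its own internal logs. This is a sketch; the host name is an assumption you would replace with your own:

```
index=_internal host=<your_ds_host> sourcetype=splunkd component=TcpOutputProc "blocked for"
| timechart span=1h count
```

If this fires constantly rather than occasionally, the receiving side (here, the HFs) is not keeping up.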


soutamo
SplunkTrust

Hi

Usually this means there is a performance issue on the indexer side. You can investigate it with the Monitoring Console (MC). If you have it configured, use it; otherwise you need to run some queries yourself, on each indexer. I assume that you have the MC installed and configured (or that you will set it up).

Open MC -> Indexing -> Indexing Performance: Deployment.

That shows how indexing is working in your environment. It shows which indexer is the bottleneck, or whether there are several. Once you see which one is busiest, select it and look at its instance performance, which shows in which part of the pipeline data is stuck. The fix differs a bit depending on that.
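That MC dashboard is driven by metrics.log. If you don't have the MC set up yet, a rough equivalent (a sketch using the fields Splunk emits in metrics.log) is to chart indexing-queue fill per indexer:

```
index=_internal source=*metrics.log* group=queue name=indexqueue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart avg(fill_pct) by host
```

An indexer whose fill percentage sits near 100 is the bottleneck.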

This error/warning is OK if it arises from time to time and not too often.

r. Ismo

shugup2923
Path Finder

Just for your knowledge, the instance on which we are seeing this error doesn't send any data to the indexers directly, so can this still be an issue on the indexer side?

If not, might it be an issue on the HFs to which this instance sends its data, since those HFs then send the data on to the indexers?

Actually this warning is OK, but sometimes the blocked time increases to more than 600 seconds, which does become a problem.
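If the receivers are only briefly busy, one common mitigation is to give the output queue on the sending instance more room to absorb the pauses. This is a sketch of outputs.conf; the server names and queue size are illustrative assumptions, not your actual values:

```
# outputs.conf on the instance showing the warning
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = hf1.example.com:9997, hf2.example.com:9997
# larger in-memory output queue than the auto-tuned default
maxQueueSize = 64MB
```

Note this only buys time during transient slowdowns; if the downstream HFs or indexers are persistently blocked, a bigger queue just delays the same warning.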


soutamo
SplunkTrust

I suppose that it is sending its internal logs to the indexers?

And quite often these issues are seen on servers other than the one where the real problem is. Of course it's possible that the issue is somewhere else, but this is one way to start looking into it.
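To locate which hop is actually blocking, one way (a sketch; it assumes all instances forward their _internal logs, so they are searchable from one search head) is to count blocked-queue events per host and queue name:

```
index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name
| sort - count
```

The host/queue pairs at the top of the list point at where in the DS > HF > indexer chain the backpressure starts.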
