Splunk Enterprise

Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked

shugup2923
Path Finder

Hi,

I have been getting this warning event on one of my Splunk instances (role: Deployment Server + License Master).

The architecture is as follows:

Deployment Server > HF1 and HF2 > Indexers

Error:

Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for X seconds. This will probably stall the data flow towards indexing and other outputs. Review the receiving system’s health in the Splunk Monitoring Console. It is probably not accepting data.


Can you please help with how this can be resolved?


isoutamo
SplunkTrust

Hi

Usually it means there is a performance issue on the indexer side. You can diagnose it with the Monitoring Console (MC). If you have the MC configured, use it; otherwise you will need to run some queries, or check each indexer individually. I assume that you have the MC installed and configured (or that you will set it up).

Open MC -> Indexing -> Indexing Performance: Deployment.

That shows how indexing is performing across your environment, and which indexer is the bottleneck (or whether there are several). Once you see which one is busiest, select it and look at its instance performance, which shows in which part of the pipeline data is getting stuck. The fix is a little different depending on which stage is blocked.
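If you prefer searching over the MC dashboards, a common starting point is queue fill data from the indexers' metrics.log. This is a sketch only; field names follow the standard metrics.log queue group, but adjust the time range and add a `host=` filter for your environment:

```
index=_internal source=*metrics.log* group=queue
| eval percent_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart avg(percent_full) by name
```

As a rough reading: the furthest-downstream queue that stays full points at the bottleneck (for example, a consistently full indexqueue usually suggests disk I/O pressure, while a full parsingqueue points at earlier pipeline stages).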

This error/warning is OK if it occurs only from time to time and not too often.

r. Ismo

shugup2923
Path Finder

Just for your knowledge: the instance on which we are seeing this error doesn't send any data to the indexers directly. In that case, can this still be an issue on the indexer side?

If not, might it be an issue on the HFs to which this instance sends data, which then forward the data to the indexers?

Actually this warning is tolerable, but sometimes the blocked time increases to more than 600 seconds, which becomes a problem.
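To see which output queue is actually blocking on this instance (or on the HFs), you can check for blocked queue events in its own metrics.log. A sketch, where `<this_host>` is a placeholder for the instance or HF hostname:

```
index=_internal host=<this_host> source=*metrics.log* group=queue blocked=true
| stats count by name
```

If the blocked queue is the tcpout queue on the HFs as well, the back-pressure is most likely originating further downstream, at the indexers.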


isoutamo
SplunkTrust

I suppose that it is sending its internal logs to the indexers?

And quite often these issues are noticed on servers other than the one where the real problem is. Of course it's possible that the issue is somewhere else, but this is one way to start looking at it.
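You can confirm whether the instance's internal logs are actually reaching the indexers with a simple search from a search head. This is a sketch; `<deployment_server_host>` is a placeholder for the instance's hostname:

```
index=_internal host=<deployment_server_host> earliest=-1h
| stats count by sourcetype
```

If this returns nothing for recent time ranges, the tcpout path from that instance (through the HFs) is indeed stalled, which matches the warning you are seeing.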
