Getting Data In

The TCP output processor has paused the data flow. Forwarding to Splunk Cloud indexers has been blocked.

ThuLe
Explorer

Hello everyone,

We are using a Universal Forwarder (UF) as an intermediate forwarder to send logs from other UFs in our on-premises environment to Splunk Cloud.

Whenever Splunk performs maintenance on our Splunk Cloud instance, the intermediate forwarder and all downstream UFs are unable to connect to Splunk Cloud. As a result, the forwarding queues on the forwarders become full. 

After the maintenance is completed and the Splunk Cloud instance is back online, the queued logs do not resume forwarding automatically. The data remains stuck in the queue until we manually restart the forwarders.


We have tried increasing the [queue] maxSize setting in server.conf on the intermediate forwarder, but this did not resolve the issue.
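
For reference, the change we tried on the intermediate forwarder looked roughly like this (the size value here is illustrative, not our exact setting):

# server.conf on the intermediate forwarder (value illustrative)
[queue]
maxSize = 100MB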

Has anyone else experienced this behavior, or have suggestions on how to handle it properly?

Thank you very much.

 


gcusello
SplunkTrust

Hi @ThuLe,

I have never experienced this behaviour, partly because I usually use two HFs (heavy forwarders) as concentrators to Splunk Cloud.

Anyway, as a first step, did you configure

maxKBps = 0

on your concentrator to avoid the queues filling up?
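
For reference, that parameter lives in limits.conf on the concentrator, under the [thruput] stanza; a minimal sketch:

# limits.conf on the intermediate forwarder
[thruput]
maxKBps = 0

Setting maxKBps = 0 removes the forwarder's default throughput cap, so the output queue is less likely to back up for bandwidth reasons.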

If so, open a case with Splunk Support.

Ciao.

Giuseppe
