
How to troubleshoot the heavy forwarder error "Tcp output pipeline blocked. Attempt '1400' to insert data failed." when monitoring syslog files?

splunker12er
Motivator

I am using a heavy forwarder to monitor Cisco ASA logs.
I have 10 Cisco ASA firewalls writing their logs to 10 different files on a syslog server (e.g. 10.0.0.1.log, 10.0.0.2.log, etc.).
I am monitoring all of these files with monitor stanzas in inputs.conf.
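
Each monitor stanza looks roughly like the sketch below (the directory, index, and sourcetype are placeholders, not my exact values):

    [monitor:///var/log/cisco-asa/*.log]
    sourcetype = cisco:asa
    index = network
    disabled = 0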

Only 3 of the devices' logs are being indexed; I am unable to search the other devices' logs from the search head.
On the heavy forwarder, Splunk Web shows the error "Tcp output pipeline blocked. Attempt '1400' to insert data failed."

The files are continuously open for writing; when they grow to a certain size they are rolled to .tgz archives and new files are opened for writing.


esix_splunk
Splunk Employee

Check the ulimits for your Splunk user and on the box itself. You might be hitting OS limits.
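
To confirm where the bottleneck is, a couple of quick checks on the forwarder help; the paths below assume a default /opt/splunk install and a 'splunk' service account:

    # Limits in effect for the account that runs splunkd
    su - splunk -c 'ulimit -n -u'

    # splunkd also logs the limits it detected at startup
    grep -i ulimit /opt/splunk/var/log/splunk/splunkd.log

    # See which queues are actually blocking; tcpout blocking usually means
    # the indexer (or the network path to it) cannot keep up
    grep "blocked=true" /opt/splunk/var/log/splunk/metrics.log | tail -20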


splunker12er
Motivator

Yes, I found that the issue is due to IO.
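
(A quick way to confirm this kind of indexer IO bottleneck is to watch extended iostat output while the output pipeline is blocked; the command below assumes the sysstat package is installed.)

    # Extended per-device stats every 5 seconds; sustained %util near 100
    # or steadily growing await on the hot/warm volume points to disk,
    # not CPU, as the limiting factor
    iostat -xz 5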

Instead of adding one more indexer to my deployment, can I increase the CPU cores and storage on the existing indexer?
Will that help resolve the issue?

(Because if I add one more indexer, I have to set up distributed search, whereas in my case the indexer and search head are the same server and there are not many users searching.)
