How to troubleshoot the heavy forwarder error "Tcp output pipeline blocked. Attempt '1400' to insert data failed." when monitoring syslog files?

splunker12er
Motivator

I am using a heavy forwarder to monitor Cisco ASA logs.
I have 10 Cisco ASA firewalls writing their logs to 10 different syslog files on a syslog server (e.g. 10.0.0.1.log, 10.0.0.2.log, etc.).
I am monitoring all of the files with a monitor stanza in inputs.conf.

Only 3 devices' logs are being indexed; I am unable to search the other logs from the search head.
I'm seeing the error below in Splunk Web on the heavy forwarder: "Tcp output pipeline blocked. Attempt '1400' to insert data failed."

The files are continuously open for writing; they grow to a certain size, roll to .tgz format, and new files are opened for writing.
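For reference, a monitor stanza for this kind of setup typically looks something like the following sketch; the directory path, sourcetype, and index names here are illustrative assumptions, not the actual configuration from this post:

    # inputs.conf on the heavy forwarder (paths/names are assumptions)
    [monitor:///var/log/cisco-asa]
    # Skip the rolled archives so they are not re-indexed
    blacklist = \.tgz$
    sourcetype = cisco:asa
    index = network
    disabled = false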


esix_splunk
Splunk Employee

Check the ulimits for your Splunk user and on the box itself. You might be reaching OS limits.
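A quick way to check, assuming a Linux host and that SPLUNK_HOME is set in your environment (adjust paths for your install):

    # Limits for the current shell (run as the user that starts splunkd)
    ulimit -n        # max open file descriptors
    ulimit -u        # max user processes

    # Limits of the running splunkd process
    cat /proc/$(pgrep -o splunkd)/limits

    # What Splunk detected at startup
    grep -i ulimit "$SPLUNK_HOME/var/log/splunk/splunkd.log"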


splunker12er
Motivator

Yes, I found that the issue is due to I/O.

Instead of adding one more indexer to my deployment, can I increase the CPU cores and storage on the existing indexer?
Will that help resolve the issue?

(Because if I add one more indexer, I have to set up distributed search, whereas in my case the indexer and search head are the same server and there are not many users searching.)
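Before deciding between scaling up and adding an indexer, it can help to confirm where the bottleneck is. A rough sketch of checks, assuming a Linux indexer with the sysstat package installed and SPLUNK_HOME set:

    # Look for blocked queues on the indexer (a sign of indexing/IO pressure)
    grep "blocked=true" "$SPLUNK_HOME/var/log/splunk/metrics.log" | tail -20

    # Watch disk utilization while data is flowing in
    iostat -x 5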
