How do I troubleshoot the heavy forwarder error "Tcp output pipeline blocked. Attempt '1400' to insert data failed." while monitoring syslog files?

splunker12er
Motivator

I am using a heavy forwarder to monitor Cisco ASA logs.
I have 10 Cisco ASA firewalls writing their logs to 10 different syslog files on a syslog server (e.g., 10.0.0.1.log, 10.0.0.2.log, etc.).
I am monitoring all of the files with monitor stanzas in inputs.conf, as in the sketch below.
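
The monitor stanzas look roughly like this (the path, index, and sourcetype shown here are illustrative, not my exact configuration):

    [monitor:///var/log/syslog/*.log]
    sourcetype = cisco:asa
    index = network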

Only 3 of the devices' logs are being indexed; I cannot search the other devices' logs from the search head.
I am seeing the error below in Splunk Web on the heavy forwarder:

Tcp output pipeline blocked. Attempt '1400' to insert data failed.

The files are continuously open for writing: they grow to a certain size, roll to .tgz archives, and new files are then opened for writing.
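
For reference, a search like the following against the forwarder's _internal index should show which queues are reporting as blocked (a general diagnostic sketch, not output from this system; replace <forwarder_host> with the heavy forwarder's hostname):

    index=_internal host=<forwarder_host> source=*metrics.log* group=queue blocked=true
    | stats count by name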


esix_splunk
Splunk Employee

Check the ulimits for your Splunk user and on the box itself. You might be hitting OS limits.
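
For example, on Linux you could check something like the following for the user that runs splunkd (the last line assumes splunkd is running and is just one way to read the live process limits):

    ulimit -n        # max open file descriptors for the current shell/user
    ulimit -u        # max user processes
    ulimit -a        # all limits
    cat /proc/$(pgrep -o splunkd)/limits    # limits of the running splunkd process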


splunker12er
Motivator

Yes, I found that the issue is due to I/O.

Instead of adding one more indexer to my deployment, can I increase the CPU cores and storage on the existing indexer?
Will that help resolve the issue?

(Because if I add one more indexer, I have to set up distributed search; in my case the indexer and search head are the same server, and there are not many users searching.)
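
One way to confirm that the disk is the bottleneck (rather than CPU) is to watch extended I/O statistics on the volume that holds the indexes; this is a generic sketch using the sysstat tools, not something specific to Splunk:

    iostat -x 5        # extended device stats every 5 seconds
    # watch the %util and await columns for the index volume

If %util stays near 100 on the index volume, the bottleneck is storage throughput rather than CPU.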
