Getting Data In

How do I troubleshoot the heavy forwarder error "Tcp output pipeline blocked. Attempt '1400' to insert data failed." when monitoring syslog files?

Motivator

I am using a heavy forwarder to monitor Cisco ASA logs.
I have 10 Cisco ASA firewalls writing their logs to 10 different syslog files on a syslog server (e.g. 10.0.0.1.log, 10.0.0.2.log, etc.).
I am monitoring all of the files with monitor stanzas in inputs.conf.
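For reference, the monitor stanzas look roughly like this. This is a sketch only; the directory path, sourcetype, and index name below are assumptions, not taken from the original post:

```ini
# Hypothetical inputs.conf on the heavy forwarder.
# The path, sourcetype, and index are placeholders -- adjust them
# to match the syslog server's actual layout.
[monitor:///var/log/syslog/*.log]
sourcetype = cisco:asa
index = network
disabled = false
```

A single wildcard monitor stanza like this covers all ten per-device files, including new files created after a roll.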

Only three devices' logs are being indexed; I am unable to search the other logs from the search head.
I'm seeing the error below in Splunk Web on the heavy forwarder: "Tcp output pipeline blocked. Attempt '1400' to insert data failed."

The files are continuously open for writing; each grows to a certain size, rolls to .tgz format, and a new file is opened for writing.
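This error generally means the forwarder's TCP output queue toward the indexer is full, so the parsing pipeline blocks. While investigating the downstream bottleneck, one common mitigation is to enlarge the output queue in outputs.conf. The group name and indexer address below are placeholders, not taken from the original post:

```ini
# Hypothetical outputs.conf on the heavy forwarder.
# "primary_indexers" and the server address are illustrative placeholders.
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.0.0.100:9997
# Larger in-memory queue so brief indexer stalls do not block the pipeline
maxQueueSize = 512MB
```

A bigger queue only absorbs bursts; if the indexer is persistently slow (e.g. due to IO), the queue will still fill eventually.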


Splunk Employee

Check the ulimits for your user and on the box. You might be hitting OS limits.
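A quick way to check is the sketch below, assuming a Linux host with the usual /proc layout. Passing the splunkd PID as the first argument is an assumption for illustration; it falls back to the current shell's PID so the commands still run:

```shell
# Inspect file-descriptor limits. Pass the splunkd PID as the first
# argument; falls back to the current shell's PID for illustration.
pid="${1:-$$}"
echo "soft nofile limit: $(ulimit -Sn)"
echo "hard nofile limit: $(ulimit -Hn)"
# Limits actually in effect for the target process (Linux /proc layout)
grep -i 'open files' "/proc/$pid/limits" 2>/dev/null || true
# Descriptors the process currently holds
echo "fds in use by $pid: $(ls "/proc/$pid/fd" 2>/dev/null | wc -l)"
```

If the "fds in use" figure is close to the soft limit, monitoring many continuously open log files can exhaust descriptors and stall inputs.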


Motivator

Yes. I found the issue was due to IO.

Instead of adding one more indexer to my deployment, can I increase the CPU cores and storage on the existing indexer?
Will that help resolve the issue?

(Because if I add one more indexer, I have to set up distributed search, whereas in my case the indexer and search head are the same server and there are not many users searching.)
