
TailingProcessor - Could not send data to output queue (parsingQueue), retrying...

tacleal
Engager

I have not been able to find a solution, although there are existing questions with the same or similar symptoms.
My log files are not being forwarded, and I am getting this message in my splunkd.log file:

INFO TailingProcessor - Could not send data to output queue (parsingQueue), retrying...

It does not look like a blocked queue, since all of the queues' current sizes are 0 and there is no blocking=true in metrics.log.

From the metrics.log file:

09-28-2011 13:04:51.939 -0600 INFO Metrics - group=pipeline, name=parsing, processor=readerin, cpu_seconds=0.000000, executes=3, cumulative_hits=2145 
09-28-2011 13:04:51.939 -0600 INFO Metrics - group=pipeline, name=parsing, processor=send-out-light-forwarder, cpu_seconds=0.000000, executes=3, cumulative_hits=2145 
09-28-2011 13:04:51.939 -0600 INFO Metrics - group=pipeline, name=parsing, processor=tcp-output-light-forwarder, cpu_seconds=0.000000, executes=3, cumulative_hits=2145 
09-28-2011 13:04:51.939 -0600 INFO Metrics - group=pipeline, name=parsing, processor=thruput, cpu_seconds=0.000000, executes=3, cumulative_hits=2145 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=pipeline, name=parsing, processor=utf8, cpu_seconds=0.000000, executes=3, cumulative_hits=2145 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=thruput, name=index_thruput, instantaneous_kbps=0.000000, instantaneous_eps=0.000000, average_kbps=0.000000, total_k_processed=0, kb=0.000000, ev=0, load_average=0.000000 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=0.130292, instantaneous_eps=0.000000, average_kbps=114.253297, total_k_processed=127486, kb=4.039062, ev=3, load_average=0.000000 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=map, name=pipelineinputchannel, current_size=20, inactive_channels=1, new_channels=0, removed_channels=0, reclaimed_channels=0, timedout_channels=0, abandoned_channels=0 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=per_host_thruput, series="soasec-qual.domain.com", kbps=0.130292, eps=0.096774, kb=4.039062, ev=3, avg_age=0.333333, max_age=1 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=per_index_thruput, series="_internal", kbps=0.130292, eps=0.096774, kb=4.039062, ev=3, avg_age=0.333333, max_age=1 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=per_source_thruput, series="/usr/local/mwradmin/splunkforwarder/var/log/splunk/metrics.log", kbps=0.130292, eps=0.096774, kb=4.039062, ev=3, avg_age=0.333333, max_age=1 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=per_sourcetype_thruput, series="splunkd", kbps=0.130292, eps=0.096774, kb=4.039062, ev=3, avg_age=0.333333, max_age=1 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=queue, name=tcpout_mwr003lx.domain.com_8090, max_size=512000, current_size=0, largest_size=0, smallest_size=0 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=queue, name=aeq, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=queue, name=aq, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=queue, name=auditqueue, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=queue, name=fschangemanager_queue, max_size_kb=5120, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=queue, name=indexqueue, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=queue, name=nullqueue, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=queue, name=parsingqueue, max_size_kb=6144, current_size_kb=0, current_size=0, largest_size=2, smallest_size=0 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=queue, name=tcpin_queue, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=0, smallest_size=0 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=realtime_search_data, system total, drop_count=0 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=search_concurrency, system total, active_hist_searches=0, active_realtime_searches=0 
09-28-2011 13:04:51.940 -0600 INFO Metrics - group=tcpout_connections, mwr003lx.domain.com_8090:134.253.181.54:8090:0, sourcePort=8089, destIp=134.253.181.54, destPort=8090, _tcp_Bps=13.93, _tcp_KBps=0.01, _tcp_avg_thruput=0.05, _tcp_Kprocessed=55, _tcp_eps=0.03

The forwarder is connecting to our Splunk server, but none of the logs are being forwarded.

Any help or suggestions would be greatly appreciated.

-T


Lowell
Super Champion

Just FYI, the search for blocking=true should be blocked=true.
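
For example, a quick check on the forwarder might look like this (the metrics.log path is taken from the sample above; adjust it for your install):

grep "blocked=true" /usr/local/mwradmin/splunkforwarder/var/log/splunk/metrics.log

If nothing comes back, no queue is currently reporting itself as blocked.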


jgr_26
Engager

Check whether your app.conf looks good.


bmignosa_splunk
Splunk Employee

I have seen this error caused by a misconfigured outputs.conf/inputs.conf.
Ensure the port the forwarder sends to is configured on the indexer as a splunktcp receiver, not the splunkd management port.
Below is a simple example.

Indexer, inputs.conf:

[splunktcp://6200]

Forwarder, outputs.conf:

[tcpout]
defaultGroup = Test

[tcpout:Test]
server = 10.10.10.10:6200
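
The same pairing can also be set up from the Splunk CLI instead of editing the .conf files by hand; a rough sketch using the placeholder port and address from the example above:

On the indexer:
./splunk enable listen 6200

On the forwarder:
./splunk add forward-server 10.10.10.10:6200
./splunk list forward-server

If you edit the .conf files directly instead, a restart of splunkd may be needed for the changes to take effect.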
