Getting Data In

How to debug a stuck (parsing) queue

chris
Motivator

Hi

I have a forwarder on AIX running version 4.3.3 that probably has a problem with its parsingQueue.

I see the following in metrics.log:

02-13-2013 16:47:50.219 +0100 INFO  Metrics - group=queue, name=parsingqueue, max_size_kb=512, current_size_kb=449, current_size=9, largest_size=9, smallest_size=8
02-13-2013 16:48:21.226 +0100 INFO  Metrics - group=queue, name=parsingqueue, max_size_kb=512, current_size_kb=449, current_size=9, largest_size=9, smallest_size=9

splunkd.log contains a lot of:

02-13-2013 17:01:37.238 +0100 INFO  TailingProcessor -   ...continuing.
02-13-2013 17:01:42.241 +0100 INFO  TailingProcessor - Could not send data to output queue(parsingQueue), retrying...

Restarting Splunk does not change the current_size_kb or current_size values, so I tried to increase the queue size following this answer:
http://splunk-base.splunk.com/answers/38218/universal-forwarder-parsingqueue-kb-size

This increases max_size_kb and current_size_kb, but the forwarder still does not send anything to the indexer.
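
For reference, the change from that answer is roughly the following in $SPLUNK_HOME/etc/system/local/server.conf on the forwarder (the 6MB value is only an example size I picked, not a recommendation):

# example only - pick a size that fits your forwarder
[queue=parsingQueue]
maxSize = 6MB

The forwarder has to be restarted before the new max_size_kb shows up in metrics.log.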

If current_size indicates how many events are in the queue, then this number is relatively low.

Is there a way to debug what events are stuck in a queue?
Can I somehow manually force the forwarder to empty the queue and drop the events (I know this is ugly)?

Another strange thing is that once in a while (every couple of hours) the logs are suddenly indexed, but I did not find any hints in splunkd.log or metrics.log. There is an identical system with the same configuration that works fine. The indexer is not very busy; it indexes about 30-40 GB a day.
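
For reference, a search along these lines over the forwarded _internal data (run on the indexer; my_forwarder is a placeholder for the forwarder's host name) should show when the queue fills up and when it suddenly drains:

index=_internal host=my_forwarder source=*metrics.log* group=queue name=parsingqueue | timechart span=5m max(current_size_kb) AS size_kb, max(current_size) AS events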

Thanks for your help,

Chris


immortalraghava
Path Finder

"Can I somehow manually force the forwarder to empty the queue and drop the events (I know this is ugly)?"

Did you find an answer for this? Thanks!


greich
Communicator

Has anyone found a way to purge the queues on (intermediate) forwarders that are stuck at 100%, without reinstalling from scratch?


gjanders
SplunkTrust

A restart will by default clear the queues. If you have a specific question, it may make sense to open a new Splunk Answers post on it, as this post is very old.
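
To see which forwarders actually have blocked queues before restarting them, a search along these lines over the forwarded _internal data usually helps (newer versions add a blocked=true flag to the queue lines in metrics.log; adjust if your version does not have it):

index=_internal source=*metrics.log* group=queue blocked=true | stats count by host, name

Hosts and queue names with a high count are the ones to look at first.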


MuS
SplunkTrust

Hi Chris, you just got an email 😉

yannK
Splunk Employee

If this is a forwarder, the problem is usually a step after the parsing queue: check the queues on the indexer side and the forwarder's default 256 KBps thruput limit.

chris
Motivator

Thanks for replying. The indexer queues (checked with SOS) seem to be OK, and the 256 KBps limit is not a problem either: the forwarder has a thruput close to 0 most of the time and then, from time to time, indexes its data (I don't see why it behaves like this). I see a couple of "WARN TcpOutputProc - Raw connection to ip=:9997 timed out" messages in splunkd.log on the forwarder, so it might be the network. I opened a support case for the issue.
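
For reference, the 256 KBps is the forwarder's default thruput limit from limits.conf. If it ever did turn out to be the bottleneck, raising it would look roughly like this in $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder (0 means unlimited, use with care):

[thruput]
# 0 removes the limit; any other value is KB per second
maxKBps = 0

The connection timeouts can be trended with something like this on the indexer (my_forwarder is a placeholder):

index=_internal host=my_forwarder source=*splunkd.log* "Raw connection" "timed out" | timechart count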
