Getting Data In

What does the queue named AQ do? How did it get blocked?

Splunk Employee

In the splunkd.log I see this error message:

06-02-2010 09:42:31.344 INFO TailingProcessor - failed to insert into AQ, retrying...


I can also see from metrics.log that the aq queue is blocked:

06-02-2010 09:42:37.814 INFO Metrics - group=queue, name=aq, blocked=true, max_size=10000, filled_count=7, empty_count=0, current_size=10000, largest_size=10000, smallest_size=9996
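As a side note, these metrics.log queue lines are simple key=value pairs and can be checked mechanically. A minimal sketch in Python (the `parse_metrics_line` helper is illustrative, not a Splunk API):

```python
def parse_metrics_line(line):
    """Parse the key=value pairs from a Splunk metrics.log queue line."""
    fields = {}
    # Keep only the payload after "Metrics - " and split on commas.
    payload = line.split("Metrics - ", 1)[1]
    for pair in payload.split(","):
        key, _, value = pair.strip().partition("=")
        fields[key] = value
    return fields

line = ("06-02-2010 09:42:37.814 INFO Metrics - group=queue, name=aq, "
        "blocked=true, max_size=10000, filled_count=7, empty_count=0, "
        "current_size=10000, largest_size=10000, smallest_size=9996")
fields = parse_metrics_line(line)
print(fields["name"], fields["blocked"], fields["current_size"])  # -> aq true 10000
```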


But what does the 'aq' queue do? I couldn't find anything in the documentation. How can I unblock it?

1 Solution

Splunk Employee

AQ is the queue feeding the ArchiveProcessor, which is the thread that handles compressed and archived inputs (.gz, .bz2, .Z, .tar, .zip, .tgz). The ArchiveProcessor is single-threaded and handles archives one at a time. This means that the file-processing code has found more than 10000 archive files, which are being processed in turn.
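The behavior described above can be illustrated with a bounded queue and a single slow consumer. This is a generic sketch of the mechanism, not Splunk's actual implementation (the size and file names are illustrative):

```python
import queue

MAX_SIZE = 5  # Splunk's AQ uses max_size=10000; small here for illustration

aq = queue.Queue(maxsize=MAX_SIZE)

# Producer side: the file-discovery code finds archive files faster than
# the single consumer thread drains them. Once the queue is full, inserts
# fail, which corresponds to the "failed to insert into AQ, retrying"
# message in splunkd.log.
blocked = False
for i in range(MAX_SIZE + 1):
    try:
        aq.put_nowait(f"archive_{i}.gz")
    except queue.Full:
        blocked = True  # producer must back off and retry later

print(blocked, aq.qsize())  # -> True 5
```

Once the consumer catches up and drains the queue, inserts succeed again, which is why the situation resolves itself given enough time.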

Nothing needs to be done in this case. The ArchiveProcessor will catch up and process this backlog given sufficient time.


New Member

Will a Splunk light forwarder use the same AQ-queue/ArchiveProcessor mechanism as a full Splunk server when it runs on a log server with a lot of .gz files?

The reason I ask is that we have performance problems when indexing .gz files directly on the Splunk server (after rsyncing them to it). It takes too long.

A workaround could be to run a light forwarder on the log server, but if it works the same way, that changes nothing.

The Splunk server has enough resources:
- only one core of eight is used heavily during indexing (ArchiveProcessor)
- storage shows little or no I/O wait


Splunk Employee

The basic answer is that archive input on the light forwarder works the same way as on a full Splunk server.


Splunk Employee

You should post this as a separate question.



Splunk Employee

Any tuning that improves indexing throughput would help here. Setting an explicit timestamp format and prefix would probably help the most.
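For example, an explicit timestamp specification in props.conf might look like the following. The sourcetype name and the format string are placeholders; adjust them to match your data:

```
# props.conf -- illustrative stanza; the sourcetype and format are examples
[my_gz_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %m-%d-%Y %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```

With an explicit TIME_PREFIX and TIME_FORMAT, Splunk skips its timestamp auto-detection on every event, which reduces per-event indexing cost.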


Splunk Employee

Stephen, what would be some alternatives if a client's AQ is constantly blocked, i.e. too many .tar.gz files are coming in for Splunk to process in time?

Increasing indexThreads seems to be a no-no; adding a full forwarder in front of the indexer, perhaps?
