Installation

Indexing delays: the monitor input cannot produce data because splunkd's processing queues are full.

btshivanand
Path Finder

We are facing indexing delays and see the below error messages on heavy forwarders. Can someone suggest what we should check?

 

01-22-2022 07:32:15.845 +0000 INFO TailReader [9126 tailreader1] - ...continuing.
01-22-2022 07:32:10.845 +0000 WARN TailReader [9126 tailreader1] - Could not send data to output queue (parsingQueue), retrying...
01-22-2022 07:31:54.057 +0000 WARN TailReader [9124 tailreader0] - Could not send data to output queue (parsingQueue), retrying...
01-22-2022 07:31:49.056 +0000 INFO TailReader [9124 tailreader0] - ...continuing.
01-22-2022 07:31:44.056 +0000 WARN TailReader [9124 tailreader0] - Could not send data to output queue (parsingQueue), retrying...
01-22-2022 07:31:39.056 +0000 INFO TailReader [9124 tailreader0] - ...continuing.
01-22-2022 07:30:09.054 +0000 WARN TailReader [9124 tailreader0] - Could not send data to output queue (parsingQueue), retrying...
01-22-2022 07:29:59.053 +0000 INFO TailReader [9124 tailreader0] - ...continuing.
01-22-2022 07:29:49.053 +0000 WARN TailReader [9124 tailreader0] - Could not send data to output queue (parsingQueue), retrying...


tscroggins
Influencer

@btshivanand 

I keep the flowcharts published at https://wiki.splunk.com/Community:HowIndexingWorks handy as a reference.

Look at:

index=_internal source=*metrics.log* host=<yourhost> blocked=true

parsingQueue itself could be full, but as @isoutamo noted, the most likely cause is a blocked output queue, which in turn blocks indexQueue, typingQueue, aggQueue, and parsingQueue on the forwarder.
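To see which queues are blocking and when, a variation of the search above (a sketch; adjust the index, host, and time range to your environment) charts blocked events per queue over time:

index=_internal source=*metrics.log* group=queue host=<yourhost> | eval is_blocked=if(blocked=="true",1,0) | timechart span=1m sum(is_blocked) by name

A queue that shows up blocked continuously, rather than in short bursts, is the one to chase downstream: the last blocked queue in the pipeline is usually closest to the real bottleneck.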

If an output queue is blocked, check the receiver using the same process. If indexQueue is blocked on an indexer, make sure 1) you have enough free disk space for hot data and 2) your storage is fast enough to keep up with your aggregate ingest rate amortized across your indexer or cluster.
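As a rough check of queue pressure on the indexers, you can chart fill percentage from the same metrics events (a sketch; current_size_kb and max_size_kb are the queue metrics fields, and the queue names in metrics.log are lowercase):

index=_internal source=*metrics.log* group=queue name=indexqueue host=<indexer> | eval fill_pct=round(current_size_kb/max_size_kb*100,1) | timechart span=1m avg(fill_pct) by host

Sustained fill near 100% usually points at storage throughput or free disk space rather than a transient ingest spike.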

If your heavy forwarder has multiple outputs, a single blocked output will eventually block indexQueue, which will in turn block all outputs in the pipeline. This is especially common with syslog outputs, which have a fixed queue length / buffer size.
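To narrow down which output is the culprit, you can look for output-related warnings in splunkd.log on the forwarder (a sketch; TcpOutputProc is the usual component for forwarding errors, and the exact component names can vary by version and output type):

index=_internal source=*splunkd.log* host=<yourhost> component=TcpOutputProc (log_level=WARN OR log_level=ERROR)

If one destination dominates the warnings, fix or remove that output first and watch whether the TailReader retries on the forwarder stop.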


isoutamo
SplunkTrust

Check in the Monitoring Console (MC) what the situation is on your indexers: are they working OK, and are they capable of indexing all incoming events?
You could also add your HFs to your MC as indexers to see what is happening on them. Just create a separate group for them, away from the real indexers.
