Indexing delays: the monitor input cannot produce data because splunkd's processing queues are full

btshivanand
Path Finder

We are facing indexing delays and see the below error messages on our heavy forwarders. Can someone suggest a fix?

 

01-22-2022 07:32:15.845 +0000 INFO TailReader [9126 tailreader1] - ...continuing.
01-22-2022 07:32:10.845 +0000 WARN TailReader [9126 tailreader1] - Could not send data to output queue (parsingQueue), retrying...
01-22-2022 07:31:54.057 +0000 WARN TailReader [9124 tailreader0] - Could not send data to output queue (parsingQueue), retrying...
01-22-2022 07:31:49.056 +0000 INFO TailReader [9124 tailreader0] - ...continuing.
01-22-2022 07:31:44.056 +0000 WARN TailReader [9124 tailreader0] - Could not send data to output queue (parsingQueue), retrying...
01-22-2022 07:31:39.056 +0000 INFO TailReader [9124 tailreader0] - ...continuing.
01-22-2022 07:30:09.054 +0000 WARN TailReader [9124 tailreader0] - Could not send data to output queue (parsingQueue), retrying...
01-22-2022 07:29:59.053 +0000 INFO TailReader [9124 tailreader0] - ...continuing.
01-22-2022 07:29:49.053 +0000 WARN TailReader [9124 tailreader0] - Could not send data to output queue (parsingQueue), retrying...


tscroggins
Influencer

@btshivanand 

I keep the flowcharts published at https://wiki.splunk.com/Community:HowIndexingWorks handy as a reference.

Look at:

index=_internal source=*metrics.log* host=<yourhost> blocked=true

parsingQueue itself could be full, but as @isoutamo noted, the most likely cause is a blocked output queue, which in turn blocks indexQueue, typingQueue, aggQueue, and parsingQueue on the forwarder.
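
If you want to watch fill levels rather than just blocked events, something like the following should work (a sketch; <yourhost> is a placeholder for your heavy forwarder, and current_size_kb/max_size_kb are the queue-size fields metrics.log emits for group=queue):

index=_internal source=*metrics.log* host=<yourhost> group=queue
| eval pct_full=round(current_size_kb / max_size_kb * 100, 2)
| timechart span=5m avg(pct_full) by name

Queues block from the back of the pipeline forward, so the furthest-downstream queue that stays near 100% points at the real bottleneck.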

If an output queue is blocked, check the receiver using the same process. If indexQueue is blocked on an indexer, make sure 1) you have enough free disk space for hot data and 2) your storage is fast enough to keep up with your aggregate ingest rate amortized across your indexer or cluster.
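
For the disk-space check, you can query the indexers' partition usage over REST from a search head; a minimal sketch, assuming you have permission to run | rest and with <indexer> as a placeholder for your indexer's name:

| rest splunk_server=<indexer> /services/server/status/partitions-space
| table splunk_server mount_point fs_type free capacity

If I remember correctly, free and capacity are reported in megabytes.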

If your heavy forwarder has multiple outputs, a single blocked output will eventually block indexQueue, which will in turn block all outputs in the pipeline. This is especially common with syslog outputs, which have a fixed queue length / buffer size.
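
To spot a single stalled output among several, metrics.log also records per-connection throughput under group=tcpout_connections; a sketch along these lines should show an output whose rate drops to zero while its siblings keep moving:

index=_internal source=*metrics.log* host=<yourhost> group=tcpout_connections
| timechart span=5m avg(_tcp_KBps) by name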


isoutamo
SplunkTrust

Check in the Monitoring Console (MC) what the situation is on your indexers: are they working correctly, and can they index all incoming events?
You could also add your HFs to your MC as indexers to see what is happening on them. Just create a separate group for them, away from the real indexers.
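
Until the HFs are in the MC, a quick manual approximation is to point the same blocked-queue search at the indexer tier (a sketch; replace the host filter with your indexer names):

index=_internal source=*metrics.log* (host=<indexer1> OR host=<indexer2>) group=queue blocked=true
| timechart span=5m count by host

Sustained blocked counts on the indexers mean the forwarders are queuing because the indexing tier can't keep up.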
