Installation

Indexing delays: the monitor input cannot produce data because splunkd's processing queues are full.

btshivanand
Path Finder

We are facing indexing delays, and we see the below error messages on heavy forwarders. Can someone suggest a fix?

 

01-22-2022 07:32:15.845 +0000 INFO TailReader [9126 tailreader1] - ...continuing.
01-22-2022 07:32:10.845 +0000 WARN TailReader [9126 tailreader1] - Could not send data to output queue (parsingQueue), retrying...
01-22-2022 07:31:54.057 +0000 WARN TailReader [9124 tailreader0] - Could not send data to output queue (parsingQueue), retrying...
01-22-2022 07:31:49.056 +0000 INFO TailReader [9124 tailreader0] - ...continuing.
01-22-2022 07:31:44.056 +0000 WARN TailReader [9124 tailreader0] - Could not send data to output queue (parsingQueue), retrying...
01-22-2022 07:31:39.056 +0000 INFO TailReader [9124 tailreader0] - ...continuing.
01-22-2022 07:30:09.054 +0000 WARN TailReader [9124 tailreader0] - Could not send data to output queue (parsingQueue), retrying...
01-22-2022 07:29:59.053 +0000 INFO TailReader [9124 tailreader0] - ...continuing.
01-22-2022 07:29:49.053 +0000 WARN TailReader [9124 tailreader0] - Could not send data to output queue (parsingQueue), retrying...


tscroggins
Influencer

@btshivanand 

I keep the flowcharts published at https://wiki.splunk.com/Community:HowIndexingWorks handy as a reference.

Look at:

index=_internal source=*metrics.log* host=<yourhost> blocked=true

parsingQueue itself could be full, but as @isoutamo noted, the most likely cause is a blocked output queue, which in turn blocks indexQueue, typingQueue, aggQueue, and parsingQueue on the forwarder.
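To see which queues are blocking and how often, you can extend that search with a summary. This is a sketch; replace <yourhost> and adjust the time range to your environment:

index=_internal source=*metrics.log* host=<yourhost> group=queue blocked=true
| stats count by host, name
| sort - count

The queue that blocks far more often than the others usually sits just upstream of the real bottleneck, since back-pressure propagates from the output back through the pipeline.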

If an output queue is blocked, check the receiver using the same process. If indexQueue is blocked on an indexer, make sure 1) you have enough free disk space for hot data and 2) your storage is fast enough to keep up with your aggregate ingest rate amortized across your indexer or cluster.

If your heavy forwarder has multiple outputs, a single blocked output will eventually block indexQueue, which will in turn block all outputs in the pipeline. This is especially common with syslog outputs, which have a fixed queue length / buffer size.
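If losing events on a non-critical output is acceptable, outputs.conf supports dropEventsOnQueueFull so one stalled destination doesn't back-pressure the whole pipeline. A minimal sketch (the group name and server list are illustrative; check the outputs.conf spec for your version before using this):

[tcpout:secondary_destination]
server = dest1.example.com:9997
# default is -1, which blocks the pipeline when this queue is full;
# a non-negative value (seconds to wait) drops events instead of blocking
dropEventsOnQueueFull = 30

Use this only where data loss is tolerable; for your primary indexer output you generally want the default blocking behavior.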


isoutamo
SplunkTrust

Check from the Monitoring Console (MC) what the situation is on your indexers: are they working OK, and are they capable of indexing all incoming events?
You could also add your HFs to your MC as indexers to see what is happening on them. Just create a separate group for them, apart from the real indexers.
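Alongside the MC dashboards, a quick way to gauge queue pressure on the indexers is to chart queue fill ratios from metrics.log. A sketch, assuming the standard current_size_kb/max_size_kb fields in the queue metrics:

index=_internal source=*metrics.log* group=queue name=indexqueue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart max(fill_pct) by host

Sustained fill near 100% on indexqueue points at indexing-tier capacity (disk I/O or CPU) rather than a forwarder-side problem.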
