TailReader error on 8.0.2: splunkd's processing queues are full

Donwadyka
Observer

I'm getting the following error on my Splunk Enterprise server, and I'm not able to get usage.log info. Can anyone point me in the right direction?


Root Cause(s):
The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.
Last 50 related messages:
08-05-2020 16:05:22.039 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
08-05-2020 16:05:17.033 -0500 INFO TailReader - ...continuing.
08-05-2020 16:05:12.033 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
08-05-2020 16:05:06.899 -0500 INFO TailReader - File descriptor cache is full (1000), trimming...
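To confirm which queue is actually blocked, a common diagnostic (a rough sketch, not specific to this thread; group, name, and blocked are the standard metrics.log queue fields) is to search splunkd's own metrics:

index=_internal source=*metrics.log* group=queue blocked=true
| stats count BY host, name
| sort - count

The queue name that dominates the results (for example indexqueue versus parsingqueue) usually shows where the backpressure starts; everything upstream of it, including the TailReader monitor input, then backs up.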


isoutamo
SplunkTrust

Check Settings > Monitoring Console > Indexing > Performance > Instance (or Deployment, depending on your setup).
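If you prefer a search over the UI, here is a rough sketch for tracking how full a given queue is over time (current_size_kb and max_size_kb are the standard metrics.log queue fields; swap the name filter for other queues):

index=_internal source=*metrics.log* group=queue name=parsingqueue
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) AS max_fill_pct BY host

Sustained values near 100% on an indexer usually mean the indexing tier can't keep up; on a forwarder they usually point at the forwarding/output rate instead.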
r. Ismo

