Monitoring Splunk

TailReader error on 8.0.2: splunkd's processing queues are full

Donwadyka
Observer

I'm getting the following error on my Splunk Enterprise server, and I'm not able to get usage.log info. Can anyone point me in the right direction?

 

Root Cause(s):
The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.
Last 50 related messages:
08-05-2020 16:05:22.039 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
08-05-2020 16:05:17.033 -0500 INFO TailReader - ...continuing.
08-05-2020 16:05:12.033 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
08-05-2020 16:05:06.899 -0500 INFO TailReader - File descriptor cache is full (1000), trimming...
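For reference, the blocked queue can usually be pinned down from metrics.log. Assuming the default _internal index and the standard group=queue metrics (adjust names if your deployment differs), a search along these lines shows which queues are reporting blocked=true and when:

index=_internal source=*metrics.log group=queue blocked=true
| timechart span=5m count by name

The first queue in the pipeline that reports blocked is usually the real bottleneck; queues upstream of it, such as the parsingQueue in the warnings above, fill up as a side effect.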


isoutamo
SplunkTrust

Check the Monitoring Console under Indexing > Performance > Instance (or Deployment, depending on your setup).
r. Ismo
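If you'd rather check it with a search than in the Monitoring Console dashboards, a rough sketch (again assuming the default _internal index and the metrics.log queue fields current_size_kb and max_size_kb) is:

index=_internal source=*metrics.log group=queue
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m perc90(pct_full) by name

A queue that sits near 100% for long stretches matches the "processing queues are full" root cause: if it is the indexing queue, look at indexer I/O and indexing rate; if it is an output queue on a forwarder, look at the receiving indexers or the network in between.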

 
