Monitoring Splunk

TailReader error on 8.0.2: splunkd's processing queues are full

Donwadyka
Observer

I'm getting the following error on my Splunk Enterprise server, and I'm not able to get any usage.log data. Can anyone point me in the right direction?

 

Root Cause(s):
The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.
Last 50 related messages:
08-05-2020 16:05:22.039 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
08-05-2020 16:05:17.033 -0500 INFO TailReader - ...continuing.
08-05-2020 16:05:12.033 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
08-05-2020 16:05:06.899 -0500 INFO TailReader - File descriptor cache is full (1000), trimming...
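
For what it's worth, the queue state is also visible in metrics.log; something like this (a rough sketch, assuming the _internal index is searchable on this instance) should list which queues are reporting blocked:

index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name
| sort - count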


isoutamo
SplunkTrust

Check the Monitoring Console: Settings > Monitoring Console > Indexing > Performance > Instance (or Deployment, depending on your setup) to see which processing queues are filling up.
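
If you'd rather query it directly, here is a sketch of the underlying search (field names come from the group=queue events in metrics.log; adjust the span and time range to taste):

index=_internal source=*metrics.log* group=queue
| eval pct_full = round(current_size_kb / max_size_kb * 100, 2)
| timechart span=10m avg(pct_full) by name

Whichever queue sits near 100% is the bottleneck; everything upstream of it, including the tailing processor in your warnings, will back up.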
r. Ismo
