Monitoring Splunk

What is causing Splunk Universal Forwarder errors?

umesh
Path Finder

Hi Team,

I am getting this error on the universal forwarder.

07-10-2023 14:18:24.639 +0200 WARN TailReader [16165 tailreader1] - Could not send data to output queue (parsingQueue), retrying...

07-10-2023 12:59:18.463 +0200 INFO HealthChangeReporter - feature="TailReader-1" indicator="data_out_rate" previous_color=yellow color=red due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."

On the UF I configured the following in limits.conf:

[thruput]

maxKBps = 0

CPU usage is below 50%, but I am still facing the issue.

In the metrics logs I am seeing perc95(current_size)=7020604.800000001 for name=tcpout_SplunkCloud.
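The perc95(current_size) figure comes from metrics.log on the UF. A quick way to see whether the queues are actually blocking (rather than just busy) is to grep the queue metrics for blocked=true. A minimal sketch, using fabricated sample lines in the metrics.log style so it is self-contained; on a real UF you would grep $SPLUNK_HOME/var/log/splunk/metrics.log instead:

```shell
# Hypothetical sample lines in the style of metrics.log (group=queue events);
# replace /tmp/metrics_sample.log with $SPLUNK_HOME/var/log/splunk/metrics.log
# on a real forwarder.
cat > /tmp/metrics_sample.log <<'EOF'
07-10-2023 14:18:24.639 +0200 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=512, current_size=1024, largest_size=1024, smallest_size=0
07-10-2023 14:18:56.101 +0200 INFO Metrics - group=queue, name=tcpout_SplunkCloud, max_size_kb=7168, current_size_kb=7020, current_size=6855, largest_size=6855, smallest_size=120
EOF

# Count how often a queue reports blocked=true. A persistently blocked
# parsingQueue combined with a near-full tcpout queue points at the
# output side (network path or indexers), not at UF thruput limits.
grep 'group=queue' /tmp/metrics_sample.log | grep -c 'blocked=true'
```

If blocked=true shows up repeatedly for the tcpout/parsing queues, the forwarder is backpressured by its destination, which matches the WARN you are seeing.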

The errors are on the universal forwarder, and the logs from the UF are being pushed to Splunk Cloud, i.e. the indexers are in the cloud.

@gcusello, please help me with this.

 


gcusello
SplunkTrust

Hi @umesh,

this message suggests that there's a block in the communication between the UF and its destination.

Check (using e.g. telnet) whether the route between the UF and its destination is open.
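A minimal sketch of that reachability check from the UF host, assuming bash. The hostname here is a placeholder; substitute the server and port from your outputs.conf (9997 is the common default for forwarder-to-indexer traffic):

```shell
# Test whether a TCP connection to host:port can be opened, using
# bash's /dev/tcp pseudo-device with a 5-second timeout.
# Returns 0 (success) if the connect succeeds, non-zero otherwise.
check_port() {
  timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Placeholder endpoint -- replace with the host/port from your outputs.conf.
if check_port inputs.example.splunkcloud.com 9997; then
  echo "port open"
else
  echo "port blocked or unreachable"
fi
```

If the port is blocked, look at firewalls, proxies, or an allow-list on the Splunk Cloud side before tuning anything on the UF itself.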

Ciao.

Giuseppe
