Monitoring Splunk

What is causing these Splunk Universal Forwarder errors?

umesh
Path Finder

Hi Team,

I am getting these errors on the universal forwarder:

07-10-2023 14:18:24.639 +0200 WARN TailReader [16165 tailreader1] - Could not send data to output queue (parsingQueue), retrying...

07-10-2023 12:59:18.463 +0200 INFO HealthChangeReporter - feature="TailReader-1" indicator="data_out_rate" previous_color=yellow color=red due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."

On the UF I have configured the following in limits.conf:

[thruput]
maxKBps = 0

CPU usage is below 50%, but I am still facing the issue.

In the forwarder's metrics logs I am seeing perc95(current_size)=7020604.800000001 for name=tcpout_SplunkCloud.

The errors are on the universal forwarder, and the logs from the UF are being forwarded to Splunk Cloud, i.e. the indexers in the cloud.
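For reference, this is roughly the search I use to read that figure from the forwarder's internal logs (the group=queue filter and the 5-minute span are assumptions, adjust for your environment):

index=_internal source=*metrics.log* group=queue name=tcpout_SplunkCloud
| timechart span=5m perc95(current_size)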

@gcusello, please help me with this.

 


gcusello
SplunkTrust

Hi @umesh,

this message indicates that there's a block in the communication between the UF and its destination.

Check (using e.g. telnet) if the route between the UF and its destination is open.
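For example, from the UF host you could try something like this (the hostname is a placeholder for your Splunk Cloud receiving endpoint, and 9997 is the default receiving port):

telnet inputs.<your-stack>.splunkcloud.com 9997

You can also check which of the configured outputs the forwarder can actually reach with:

$SPLUNK_HOME/bin/splunk list forward-server

If your Splunk Cloud destination shows up under "Configured but inactive forwards", the UF is not able to open a connection to it.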

Ciao.

Giuseppe
