Monitoring Splunk

What is causing Splunk Universal Forwarder errors?

umesh
Path Finder

Hi Team,

I am getting the following messages on the Universal Forwarder:

07-10-2023 14:18:24.639 +0200 WARN TailReader [16165 tailreader1] - Could not send data to output queue (parsingQueue), retrying...

07-10-2023 12:59:18.463 +0200 INFO HealthChangeReporter - feature="TailReader-1" indicator="data_out_rate" previous_color=yellow color=red due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."

On the UF I configured, in limits.conf:

[thruput]

maxKBps = 0

CPU usage is below 50%.

but I am still facing the issue.
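For reference, maxKBps = 0 removes the forwarder-side bandwidth cap. The effective value can be confirmed on the UF with btool (the $SPLUNK_HOME path below assumes a default Universal Forwarder install):

$SPLUNK_HOME/bin/splunk btool limits list thruput --debug

This prints the winning [thruput] settings and which limits.conf file they come from.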

In the metrics logs I am seeing these figures for name=tcpout_SplunkCloud: perc95(current_size)=7020604.800000001.

The errors are on the Universal Forwarder, and the logs from the UF are being pushed to Splunk Cloud, i.e. the indexers in the cloud.
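If it helps, here is a sketch of the search I can use to see which queues are blocking, taken from the UF's internal metrics (the host value is a placeholder, and this assumes the UF's _internal logs are reaching Splunk Cloud):

index=_internal source=*metrics.log* host=<uf_hostname> group=queue blocked=true
| stats count by name

A steadily growing count for the tcpout or parsing queue would match the TailReader warning above.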

@gcusello, please help me with this.

 


gcusello
SplunkTrust

Hi @umesh,

this message indicates that there is a block in the communication between the UF and its destination.

Check (e.g. using telnet) whether the route between the UF and its destination is open; a quick sketch of the check is below.
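A minimal connectivity check from the UF host, assuming the default receiving port 9997 and a placeholder hostname (take the real values from the server setting in your outputs.conf):

telnet inputs.example.splunkcloud.com 9997

or, if telnet is not installed:

nc -vz inputs.example.splunkcloud.com 9997

If the connection is refused or times out, the output queue on the UF fills up and you get exactly the TailReader warning shown above.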

Ciao.

Giuseppe
