
What is causing Splunk Universal Forwarder errors?

umesh
Path Finder

Hi Team,

I am getting these errors on the universal forwarder:

07-10-2023 14:18:24.639 +0200 WARN TailReader [16165 tailreader1] - Could not send data to output queue (parsingQueue), retrying...

07-10-2023 12:59:18.463 +0200 INFO HealthChangeReporter - feature="TailReader-1" indicator="data_out_rate" previous_color=yellow color=red due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."

On the UF, I configured limits.conf to remove the throughput cap:

[thruput]

maxKBps = 0
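
To verify the value splunkd actually applies (a sketch, assuming a default Linux install under /opt/splunkforwarder), btool can show the effective setting and the file it comes from. Note that .conf attribute names are case-sensitive, so it must be spelled maxKBps exactly, otherwise the UF's default 256 KB/s cap stays in effect:

/opt/splunkforwarder/bin/splunk btool limits list thruput --debug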

CPU usage is below 50%.

but I am still facing the issue.

In the metrics logs I am seeing perc95(current_size)=7020604.800000001 for name=tcpout_SplunkCloud.
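
That figure would come from a metrics.log queue search along these lines (a sketch; <your_uf_host> is a placeholder for the forwarder's hostname):

index=_internal host=<your_uf_host> source=*metrics.log* group=queue name=tcpout_SplunkCloud
| timechart span=5m perc95(current_size) AS p95_queue_size

A companion search on the same host can show which queues are blocking and backing up:

index=_internal host=<your_uf_host> source=*metrics.log* group=queue blocked=true
| stats count BY name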

The errors occur on the universal forwarder, and the UF sends its logs to Splunk Cloud, i.e. the indexers are in Splunk Cloud.

@gcusello, please help me with this.

 


gcusello
SplunkTrust

Hi @umesh,

this message suggests that communication between the UF and its destination is blocked.

Check (e.g. using telnet) whether the route between the UF and its destination is open.
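
For example (a sketch; the hostname is a placeholder for your stack's input endpoint, and 9997 is the usual Splunk Cloud forwarding port, so adjust if yours differs):

telnet inputs.<your-stack>.splunkcloud.com 9997

or, where telnet is not installed:

nc -vz inputs.<your-stack>.splunkcloud.com 9997

You can also check the forwarder's own view of its configured outputs:

/opt/splunkforwarder/bin/splunk list forward-server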

Ciao.

Giuseppe
