Monitoring Splunk

What is causing these Splunk Universal Forwarder errors?

umesh
Path Finder

Hi Team,

I am getting these errors on the Universal Forwarder:

07-10-2023 14:18:24.639 +0200 WARN TailReader [16165 tailreader1] - Could not send data to output queue (parsingQueue), retrying...

07-10-2023 12:59:18.463 +0200 INFO HealthChangeReporter - feature="TailReader-1" indicator="data_out_rate" previous_color=yellow color=red due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."
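For reference, one quick way to see which queue is backing up on the UF itself (a minimal sketch, assuming a default install under /opt/splunkforwarder) is to grep the local metrics.log; a queue line containing blocked=true means that queue is saturated:

# Show recent queue fill metrics on the UF
grep "group=queue" /opt/splunkforwarder/var/log/splunk/metrics.log | tail -20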

In the UF's limits.conf I configured:

[thruput]

maxKBps = 0
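One way to confirm the stanza actually took effect (a sketch, again assuming the default forwarder install path) is btool:

# Show the effective thruput settings and which conf file they come from
/opt/splunkforwarder/bin/splunk btool limits list thruput --debug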

CPU usage is below 50%.

but I am still facing the issue.

In the metrics logs I am seeing perc95(current_size)=7020604.800000001 for name=tcpout_SplunkCloud.

The errors appear on the Universal Forwarder, and the logs from the UF are being pushed to Splunk Cloud, i.e. the indexers in the cloud.
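For context, if useACK is enabled in outputs.conf, the default maxQueueSize is 7MB, so a 95th-percentile current_size of roughly 7,020,605 bytes would suggest the output queue is running near full. One way to check whether that queue is actually blocking (a sketch, assuming the default forwarder path) is:

# Look for blocked tcpout queue entries in the UF's metrics.log
grep "tcpout_SplunkCloud" /opt/splunkforwarder/var/log/splunk/metrics.log | grep "blocked=true" | tail -5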

@gcusello Please help me with this.



gcusello
SplunkTrust

Hi @umesh,

this message suggests that there's a blockage in the communication between the UF and its destination.

Check (using e.g. telnet) whether the route between the UF and its destination is open.
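For example (a sketch; the host below is a placeholder, so substitute the receiving endpoint and port from your Splunk Cloud UF credentials app, typically port 9997):

# Test whether the receiving port is reachable from the UF host
telnet inputs.<your_stack>.splunkcloud.com 9997

# or, if telnet is not available:
nc -vz inputs.<your_stack>.splunkcloud.com 9997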

Ciao.

Giuseppe
