Hello,
I am experiencing intermittent log ingestion issues on some servers, and the Universal Forwarder's health log suggests its processing queues are saturating. Details of the issue and related observations are below:
11-15-2024 17:27:35.615 -0600 INFO HealthChangeReporter - feature="Real-time Reader-0" indicator="data_out_rate" previous_color=yellow color=red due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."
host = EAA-DC
index = _internal
source = C:\Program Files\SplunkUniversalForwarder\var\log\splunk\health.log
sourcetype = splunkd
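In case it helps with diagnosis: as I understand it, queue fill on this host can be watched from metrics.log (group=queue events carry current_size_kb and max_size_kb per queue), assuming the forwarder's _internal data is searchable on an indexer. Something along these lines:

index=_internal host=EAA-DC source=*metrics.log* group=queue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) by name

A queue pinned near 100% (often the tcpout/output queue on a forwarder) should show where the backpressure starts.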
limits.conf
[thruput]
maxKBps = 0
server.conf
[queue]
maxSize = 512MB
inputs.conf
[monitor://C:\packetbeat.json]
disabled = false
index = dns
sourcetype = packetbeat
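(If it matters, the effective values can be double-checked with btool; the paths below assume a default Universal Forwarder install location.)

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool limits list thruput --debug
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool server list queue --debug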
Any direction to resolve this is appreciated! Thank you!
@s_s Hello, check out the queues on the HWF pipeline (a sample search and a related setting are sketched below), and also see if you can apply asynchronous forwarding:
https://www.linkedin.com/pulse/splunk-asynchronous-forwarding-lightning-fast-data-ingestor-rawat
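To see which queues are blocking and on which host, a search like this over the internal metrics can help (blocked=true is logged by metrics.log group=queue when a queue fills up):

index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name
| sort - count

Separately from asynchronous forwarding (the article above covers that), a related knob often reviewed at the same time on a heavy forwarder is parallel ingestion pipelines, along with the output queue size. A minimal sketch, assuming the HWF has spare CPU cores; the values are examples, not recommendations:

server.conf (on the HWF)
[general]
# each additional pipeline adds a full processing pipeline set and needs spare CPU
parallelIngestionPipelines = 2

outputs.conf (on the HWF)
[tcpout]
# in-memory output queue toward the indexers
maxQueueSize = 512MB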
If this helps, please upvote.