Hi guys,
I am currently encountering an error that is affecting performance and causing delays in file processing. If anyone has insights or solutions to address this issue, please share them. Here are the relevant log messages:
01-17-2025 04:33:12.580 -0600 INFO TailReader [1853894 batchreader0] - Will retry path="/apps2.log" after deferring for 10000ms, initCRC changed after being queued (before=0x47710a7c475501b6, after=0x23c7e0f63f123bf1). File growth rate must be higher than indexing or forwarding rate.
01-17-2025 04:20:24.672 -0600 WARN TailReader [1544431 tailreader0] - Enqueuing a very large file=/apps2.log in the batch reader, with bytes_to_read=292732393, reading of other large files could be delayed
I would greatly appreciate your assistance.
Thank you.
After increasing parallelIngestionPipelines to 4, I have observed some improvements.
Thanks
It seems you have big and frequently changing files and your forwarder can't keep up with reading them. That's the problem. But the root cause can be really anywhere, depending on your infrastructure. I assume you're pushing it to the Cloud but maybe your network connection can't handle the traffic. Or your Cloud indexers can't handle the amount of data. Or you're reading the files in an inefficient way (for example - from a networked filesystem)...
There can be many reasons.
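One way to narrow it down is to look at the forwarder's own throughput metrics in the _internal index (assuming the forwarder sends its internal logs to your indexers). A rough sketch of such a search, where <your_forwarder> is a placeholder for the forwarder's hostname and the field names are the ones typically seen in metrics.log thruput events:

index=_internal source=*metrics.log* group=thruput host=<your_forwarder>
| timechart avg(instantaneous_kbps) avg(average_kbps)

If the measured throughput sits flat at a ceiling, the forwarder is likely throttled (maxKBps) or saturated somewhere downstream (network, indexers).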
Hi Rakzskull, what is the maxKBps set to in limits.conf on the Forwarder where you see this message? By default it is 256. If possible, you could try increasing thruput. Also, this doc may be helpful:
https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/Troubleshootingeventsindexingdel...
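For reference, maxKBps lives in the [thruput] stanza of limits.conf on the forwarder. A minimal sketch (the value shown is just an example; 0 means unlimited, and the change needs a forwarder restart):

# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder
[thruput]
# maximum KB/s the forwarder will send; 0 = unlimited
maxKBps = 0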
It might be worth investigating why the file is so large, whether the data being logged is excessive, and if there are opportunities to optimize its size, rotation frequency, etc. Hope this helps!
@_gkollias I already have it set to "0" (unlimited). Is there anything else I should update?
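For reference, the effective value (and which file it comes from) can be checked with btool, roughly like this (assuming a default install path):

$SPLUNK_HOME/bin/splunk btool limits list thruput --debug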
I have parallelIngestionPipelines = 2 configured in server.conf, but I am still getting the same issue.
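For context, this is roughly what that stanza looks like (a sketch; the pipeline count shown is just an example, and each extra pipeline uses additional CPU and memory on the forwarder):

# $SPLUNK_HOME/etc/system/local/server.conf on the forwarder
[general]
parallelIngestionPipelines = 2

As far as I understand, a single large file is still read by one pipeline, so extra pipelines mainly help when several busy inputs can be spread across them.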