File growth rate must be higher than indexing or forwarding rate

Rakzskull
Path Finder

Hi guys,

I am currently encountering an error that is affecting performance and causing delays in file processing. If anyone has insights or suggestions on how to address this issue, please share them.

01-17-2025 04:33:12.580 -0600 INFO TailReader [1853894 batchreader0] - Will retry path="/apps2.log" after deferring for 10000ms, initCRC changed after being queued (before=0x47710a7c475501b6, after=0x23c7e0f63f123bf1). File growth rate must be higher than indexing or forwarding rate.


01-17-2025 04:20:24.672 -0600 WARN TailReader [1544431 tailreader0] - Enqueuing a very large file=/apps2.log in the batch reader, with bytes_to_read=292732393, reading of other large files could be delayed


I would greatly appreciate your assistance.

Thank you.

1 Solution

Rakzskull
Path Finder

After increasing parallelIngestionPipelines to 4, I have observed some improvement.

Thanks 
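
For reference, the change boils down to the following in server.conf on the forwarder. This is a minimal sketch: the value 4 is simply what worked here, and since each pipeline consumes roughly one CPU core, it assumes the box has cores to spare.

[general]
# Number of parallel ingestion pipelines on the forwarder.
# Keep this below the number of CPU cores, as noted in the reply below.
parallelIngestionPipelines = 4

The forwarder needs a restart for the change to take effect.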

isoutamo
SplunkTrust

Nice, but be careful not to add too many pipelines. Usually the count should be lower than the number of cores/CPUs on your box. And this applies only to forwarders.

PickleRick
SplunkTrust

It seems you have big, frequently changing files, and your forwarder can't keep up with reading them. That's the symptom, but the root cause could lie almost anywhere, depending on your infrastructure. I assume you're pushing the data to Splunk Cloud, but maybe your network connection can't handle the traffic. Or your Cloud indexers can't handle the amount of data. Or you're reading the files in an inefficient way (for example, from a networked filesystem)...

There can be many reasons.


_gkollias
Builder

Hi Rakzskull, what is maxKBps set to in limits.conf on the forwarder where you see this message? By default it is 256. If possible, you could try increasing the thruput limit. Also, this doc may be helpful:
https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/Troubleshootingeventsindexingdel...

It might be worth investigating why the file is so large, whether the data being logged is excessive, and whether there are opportunities to optimize its size, rotation frequency, etc. Hope this helps!
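
For reference, the setting lives in the [thruput] stanza of limits.conf on the forwarder. A minimal sketch (the value 1024 is only an illustration, not a recommendation):

[thruput]
# Maximum KB/s the forwarder will process; the default is 256, and 0 means unlimited.
maxKBps = 1024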


Rakzskull
Path Finder

@_gkollias It is already set to "0" (unlimited). Is there anything else I should update?


isoutamo
SplunkTrust

Another thing that may help is setting parallelIngestionPipelines > 1 in server.conf. This won't speed up reading of an individual file, but if there are many files it could help.

Rakzskull
Path Finder

I already have parallelIngestionPipelines = 2 in server.conf and am still getting the same issue.


isoutamo
SplunkTrust

Here is one .conf presentation which could help you: https://conf.splunk.com/files/2019/slides/FN1570.pdf
But as @PickleRick said there could be many reasons behind that issue.