Hi @richgalloway, yes, maxKBps has been set to zero.

A case was also opened with support. We told them we were getting lots of "Enqueuing a very large file" (with bytes_to_read) events, and they suggested increasing min_batch_size_bytes to a value larger than the file size as a workaround for the delays.

Response from support:
By default, when forwarding large files, Splunk stops using the tail reader for ingestion and passes the file to the batch reader. The batch reader has a default limit of 20,971,520 bytes; files beyond that size are handled by the batch reader, which causes the enqueuing of events.

Workaround:
We increased min_batch_size_bytes in limits.conf to a value larger than the file size so that files are handled by the trailing processor instead of the batch reader, which was causing the stall. This resolved the situation at the time.

Current contents of limits.conf:

[thruput]
maxKBps = 0
[default]
min_batch_size_bytes = 1000000000

Even now we are still getting the "Enqueuing a very large file" events in splunkd.log, because some of the log files on the syslog server are almost 17 GB in size. I'm thinking of increasing min_batch_size_bytes to maybe 10 GB. Do you have any suggestions regarding this approach?
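For reference, here is a minimal sketch of what that change could look like in limits.conf. The byte values are just my own arithmetic for the sizes mentioned above (1 GB = 1024^3 bytes), not figures confirmed by support; note that 10 GB would still be below the ~17 GB files, so if the goal is to keep those files on the trailing processor, the threshold would have to be above their size:

[default]
# 10 GB = 10 * 1024^3 = 10,737,418,240 bytes -- still smaller than a ~17 GB file
# (17 GB = 18,253,611,008 bytes), so such files would go to the batch reader anyway.
# A value above the largest expected file, e.g. 20 GB, would keep them on the tail reader:
min_batch_size_bytes = 21474836480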