We have a Linux server that receives our syslog traffic, and a universal forwarder runs on that machine to read all of the syslog files and send them off to our Splunk indexers.
300+ different devices send to the syslog server, and a few of them produce very large files. There is a separate file for each device, and each rolls over to a new file at midnight.
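For reference, the files are picked up with a plain monitor input along these lines (the path and filename pattern here are illustrative, not our exact config):
[monitor:///var/log/remote-syslog]
# Hypothetical layout: one .log file per device, rolled at midnight
whitelist = \.log$
sourcetype = syslog
disabled = false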
This is where the issue occurs. The universal forwarder is hitting this error on some of the files:
WARN TailReader - Enqueuing a very large file
It logs that for each of those large files. Some of the files do eventually get read, but the data is behind by that point; others are not read at all.
What can I do on the universal forwarder to keep these files from being read in batch mode (which is how the ones that do eventually get read are handled) and instead just tail them as they grow? And how do I make sure that all of the files are getting picked up?
Thanks.
hey @jeffbat
This is not an error, it's a warning. Your indexing might get delayed if you are monitoring files rather than using a TCP/UDP input, either because the volume is high or because the historical logs are large the first time they are tailed.
Try this:
You can edit your own local settings to remove the limit,
for example in $SPLUNK_HOME/etc/apps/SplunkUniversalForwarder/local/limits.conf
or, on the light forwarder, in $SPLUNK_HOME/etc/apps/SplunkLightForwarder/local/limits.conf:
[thruput]
# 0 means unlimited
maxKBps = 0
NOTE: This may drastically increase license consumption, but you can change it back at any time.
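After editing, you can confirm which limits.conf value the forwarder is actually applying (and which file it comes from) with btool:
$SPLUNK_HOME/bin/splunk btool limits list thruput --debug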
Would this setting prevent large files from being enqueued, or will it just change the amount of data the universal forwarder sends out at a time?
I had changed that setting to 5120, but then it seemed like many of the other files stopped being picked up.
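For what it's worth, maxKBps only throttles how fast the forwarder sends data out; it does not decide how a file is read. The cutoff between tailing and batch reading appears to be min_batch_size_bytes under [inputproc] in limits.conf (check the limits.conf spec for your Splunk version before relying on it): files above that size are handed to the batch reader, which is what produces the "Enqueuing a very large file" warning. A sketch that raises the threshold so more files stay with the tail reader:
[inputproc]
# Files larger than this many bytes go to the batch reader instead of
# the tailing processor. The shipped default is 20 MB; 200 MB below is
# an illustrative value, not a recommendation.
min_batch_size_bytes = 209715200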