Getting Data In

What is the best way to monitor large log files?


Hi Team,

What is the best way to monitor large, rolling log files?

As of now, I have the following configuration to monitor the files (there are 180+ log files).
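It's essentially a standard monitor input, roughly like this (path, index, and sourcetype anonymized):

[monitor:///app/logs/*.out]
# anonymized path; the real inputs cover 180+ files like this
sourcetype = app_logs
index = app
disabled = false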
At the end of the month, the log files are deleted and new log files are created by the application.

The issue is that the log files grow to 20 GB+ by the end of the month.

Recently, when we migrated the server, we started getting the following errors for some of the log files:


12-02-2020 19:03:58.335 +0530 ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/xxx/xxx/xxx/xxx.out). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at for more info.
WARN  TailReader - Enqueuing a very large file=<hadoop large file> in the batch reader, with bytes_to_read=4981188783, reading of other large files could be delayed
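
If I'm reading the docs right, the WARN just means these files are past the batch-reader threshold, so Splunk reads them with the batch reader instead of tailing them, which can delay other large files. I believe the threshold is min_batch_size_bytes under [inputproc] in limits.conf (20 MB by default, if I have that right):

[inputproc]
# believed default: files larger than this are handled by the batch reader
min_batch_size_bytes = 20971520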


I tried the "crcSalt = <SOURCE>" option as well, but it made no difference.
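The ERROR itself suggests a larger initCrcLen for the sourcetype; if I understand correctly, that goes in the monitor stanza in inputs.conf, something like this (the path is my anonymized placeholder, and 1024 is just a guess at a value):

[monitor:///app/logs/*.out]
# hash more of each file's header than the 256-byte default so recreated files are told apart
initCrcLen = 1024
crcSalt = <SOURCE>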

Please suggest what configuration I should use to monitor the log files in this scenario.

