
What is the best way to monitor large log files?

gaurav_maniar
Builder

Hi Team,

What is the best way to monitor large rolling log files?

As of now, I have the following configuration to monitor the files (there are 180+ log files):

[monitor:///apps/folders/.../xxx.out]
index=app_server

 

At the end of the month, the log files are deleted and new log files are created by the application.

The issue is that the log files grow to 20 GB+ in size by the end of the month.

Recently, when we migrated the server, we started getting the following error for some of the log files:

12-02-2020 19:03:58.335 +0530 ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/xxx/xxx/xxx/xxx.out). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.
WARN  TailReader - Enqueuing a very large file=<hadoop large file> in the batch reader, with bytes_to_read=4981188783, reading of other large files could be delayed

 

I tried the "crcSalt = <SOURCE>" option as well, but it made no difference.
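
From the error message, I understand the other suggestion is to use a larger initCrcLen on the monitor stanza. As far as I understand, initCrcLen controls how many bytes of the file are used to compute the initial checksum, so a larger value should help when a new file starts with the same header bytes as an old one. Is something like the following the right direction? (The initCrcLen value of 1024 below is just an example I picked, not something I have tested.)

[monitor:///apps/folders/.../xxx.out]
index=app_server
initCrcLen=1024
# crcSalt = <SOURCE>   (tried already, made no difference)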

Please suggest what configuration I should use for monitoring the log files in this case.

Thanks.
