Getting Data In

Best Practice for Splunk to collect rolling logs

splunkingguy
Explorer

The app writes log entries to a log file, say /var/theapp/thelogfile.log.

The app is configured to roll the log file once it reaches a certain size and to keep only x copies, say 3 copies of 10 MB each. So we eventually end up with three 10 MB files like this:

/var/theapp/thelogfile.log

/var/theapp/thelogfile.log.1

/var/theapp/thelogfile.log.2

The log file gets maybe 400-500 entries per minute.

How do I ensure the collector won't miss or duplicate log entries in this scenario? Or are we always at risk of the collector missing the last few log entries that push thelogfile.log over 10 MB, with the writing app rolling the log to thelogfile.log.1 before Splunk has read those final entries?

Would making the size of the log files smaller or larger help mitigate the issue?

I assume telling Splunk to watch all 3 copies of the log would lead to duplicate entries in Splunk?

 


ITWhisperer
SplunkTrust

Do you have any evidence that Splunk is duplicating log events?

Splunk has been dealing with this sort of log scenario for a long time, and if there were any bugs in this area, they would have been fixed long ago!


splunkingguy
Explorer

I do not, mainly because I did not ask the logging team to ingest multiple log files that contain copies of the exact same log entries. At best it seemed like wasteful processing and at worst, duplicate log entries - or so I ASSumed.

Are there official Splunk recommendations on this topic? Do you tell Splunk to watch all rolled copies of the log because Splunk will reliably deduplicate the rolled events? If yes, a reference please.

Or is there a different way to configure this? If it's Splunk just watching the main active log and ignoring the rolled copies, how do we know Splunk is fast enough and checks the log often enough? Isn't it a race condition at that point between

A. the writing app rolling the log after the log event that pushes the log over the size limit

B. the log collector checking the log for new entries


ITWhisperer
SplunkTrust

Configure a monitor for your files, for example /var/theapp/thelogfile.log*

https://docs.splunk.com/Documentation/Splunk/latest/Data/Monitorfilesanddirectories 
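
For reference, here is a minimal inputs.conf sketch of such a monitor as it might look on a forwarder. The index and sourcetype values are placeholders I made up for illustration, not anything prescribed in this thread:

[monitor:///var/theapp/thelogfile.log*]
disabled = false
# Placeholder names - use whatever index and sourcetype your logging team has defined.
index = main
sourcetype = theapp_log

The wildcard covers the live file and the rolled copies, and Splunk's file tracking (discussed further down in this thread) is what keeps the rolled copies from being ingested twice.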

splunkingguy
Explorer

Monitoring all the log files, including the rolled or rotated files, does seem to address my question, mainly because following the link you provided led me here:

How the Splunk platform handles log file rotation - Splunk Documentation

 

So the rolled log will have the same cyclic redundancy check (CRC) as the original file did before it rolled, because the first 256 bytes will be the same?

And then Splunk will notice that the rolled log is bigger than what it had already read from the original, so it knows to go to the end of the rolled log and grab any data that was added to the original a split second before it rolled, before Splunk was able to read it?

When the .1 log gets rolled to .2, their CRCs will be the same AND their file sizes will be the same, so Splunk will not waste any time with .2?


-Or-
Is the CRC always going to be different between the live log and the first rolled copy, since the first 256 bytes will always be different between them? If yes, won't Splunk treat the rolled copy as a net-new log file and ingest all of its entries, 99.99999% of which it would already have ingested the first time it saw them in the live log file?


ITWhisperer
SplunkTrust

The first case - in fact, the forwarder can even keep a file handle open on the log once it has opened it, so even if the file is renamed, the forwarder can keep reading until it reaches the end.
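
For completeness, the fingerprinting behavior discussed above has a couple of knobs in inputs.conf. This is only a sketch of the documented settings and their defaults; for a simple size-based rotation like yours there is normally nothing to change:

[monitor:///var/theapp/thelogfile.log*]
# Number of bytes at the start of the file used for the CRC fingerprint (256 is the default).
initCrcLength = 256
# crcSalt = <SOURCE> mixes the file path into the CRC. It helps when many distinct
# files share identical headers, but with rolling logs like these it would make every
# rename look like a brand-new file and cause re-ingestion, so leave it unset here.
# crcSalt = <SOURCE>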

splunkingguy
Explorer

Thank you, this was helpful
