The app writes log entries to a log file, say /var/theapp/thelogfile.log.
The app is configured to roll the log file once it reaches a certain size and to keep only x copies, say 3 copies of 10 MB each. So we eventually end up with three 10 MB files like this:
/var/theapp/thelogfile.log
/var/theapp/thelogfile.log.1
/var/theapp/thelogfile.log.2
The log file gets maybe 400-500 entries per minute.
How do I ensure the collector won't miss or duplicate log entries in this scenario? Or are we always at risk of the collector missing the last few entries that push thelogfile.log over 10 MB, with the writing app rolling the log to thelogfile.log.1 before Splunk has read those final entries?
Would making the size of the log files smaller or larger help mitigate the issue?
I assume telling Splunk to watch all 3 copies of the log would lead to duplicate entries in Splunk?
Configure a monitor for your files, for example /var/theapp/thelogfile.log*
https://docs.splunk.com/Documentation/Splunk/latest/Data/Monitorfilesanddirectories
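For reference, a minimal monitor stanza in inputs.conf might look something like this (the index and sourcetype values are placeholders, not anything from your environment; see the linked docs for the full option list):

```
[monitor:///var/theapp/thelogfile.log*]
index = main
sourcetype = theapp_log
disabled = false
```

The trailing wildcard covers the live file and the rolled .1 and .2 copies in one stanza.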
Do you have any evidence that Splunk is duplicating log events?
Splunk has been dealing with this sort of log rotation scenario for a long time, and if there were any bugs in this area, they would have been fixed long ago!
I do not, mainly because I did not ask the logging team to ingest multiple log files that contain copies of the same exact log entries. At best it seemed like wasteful processing, and at worst, duplicate log entries, I ASSumed.
Are there official Splunk recommendations on this topic? Do you tell Splunk to watch all rolled copies of the log because Splunk will reliably deduplicate the rolled events? If yes, a reference please.
Or is there a different way to configure this? If it's Splunk just watching the main active log and ignoring the rolled copies, how do we know Splunk is fast enough and checks the log often enough? Is it not a race condition at that point between
A. the writing app rolling the log after the log event that pushes the log over the size limit
B. the log collector checking the log for new entries
Configure a monitor for your files, for example /var/theapp/thelogfile.log*
https://docs.splunk.com/Documentation/Splunk/latest/Data/Monitorfilesanddirectories
Monitoring all the log files, including rolled or rotated files, might address my question, because following the link you provided led me here:
How the Splunk platform handles log file rotation - Splunk Documentation
So the rolled log will have the same cyclic redundancy check (CRC) as the original file did before it rolled, because the first 256 bytes will be the same?
And then Splunk will notice the rolled log is bigger than what it had already read, so it knows to seek to the end of its previously read data in the rolled log and grab anything that was appended to the original log a split second before it rolled, before Splunk was able to read it?
When the .1 log gets rolled to .2, their CRCs will be the same AND their file sizes will be the same, so Splunk will not waste any time on .2?
-Or-
Is the CRC always going to be different between the live log and the first rolled copy, since the first 256 bytes will always differ between them? If yes, won't Splunk treat the rolled copy as a net-new log file and ingest all of its entries, 99.99999% of which it would already have ingested the first time it saw them in the live log file?
The first case. In fact, the forwarder can even keep a file handle open on the log, so even if the file is renamed, the forwarder can keep reading from that handle until it reaches the end.
Thank you, this was helpful