Hello, I have a Windows machine with a UF installed that collects various logs, such as WinEventLog. These logs work correctly and have been ingested into Splunk for some time.
I wanted to add a new log from a piece of software that runs on the machine, so I added it to the inputs.conf file. The log is a trace log for the software, and it appears as monitored in the _internal index with no errors. The log is ingested correctly by the initial batch read, but the UF fails to monitor the file afterwards.
The log has a fixed size of 50MB; once it is full, it starts overwriting the oldest events, meaning it wraps around and writes from the top of the file again.
I have already tried the following (the current config is sketched below this list):
Changing initCrcLength
Changing ignoreOlderThan
Setting NO_BINARY_CHECK = true - this fixed some earlier errors where Splunk believed the file to be binary; it's just ANSI-encoded.
Setting alwaysOpenFile = true - this did not seem to change anything.
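For context, the relevant config currently looks roughly like this (the path, index, sourcetype and values are placeholders, not my exact ones):

```
# inputs.conf on the UF
[monitor://C:\Program Files\SomeSoftware\trace.log]
index = main
sourcetype = software:trace
initCrcLength = 1024
ignoreOlderThan = 7d
alwaysOpenFile = 1

# props.conf (the setting that fixed the earlier binary-file errors)
[software:trace]
NO_BINARY_CHECK = true
```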
Thanks in advance for any tips, tricks or advice.
As many have mentioned in this thread, even if I could get Splunk to read the log file, it would end up with duplicate logs, or I might lose events if the UF reads too slowly.
The solution is to write a custom script that can handle the log's behaviour of overwriting the oldest events once it is "full". The script lets Splunk ingest the events and can deal with the duplicates.
As for events lost to overwriting, I don't have a bulletproof solution beyond ensuring events are ingested into Splunk faster than they are written. If necessary, consider having the script simply tail the circular log and write out a new, append-only log file, as sketched below.
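A minimal sketch of that tail-and-rewrite approach (the paths, the cp1252/ANSI encoding, and the assumption that identical lines never legitimately repeat are all specific to my setup and illustrative only):

```python
# tail_trace.py - copy unseen lines from the circular 50MB trace log
# into an append-only file that the UF can monitor normally.
# A sketch, not production code: the dedup set grows without bound,
# and the whole 50MB file is re-read on every poll.
import hashlib
import time

SRC = r"C:\Program Files\SomeSoftware\trace.log"  # circular log (illustrative path)
DST = r"C:\SplunkLogs\trace_rewritten.log"        # append-only copy for the UF

seen = set()  # digests of lines already copied out

def copy_new_lines():
    # The application keeps SRC open; reading works here assuming the
    # writer does not hold an exclusive lock on the file.
    with open(SRC, "r", encoding="cp1252", errors="replace") as src, \
         open(DST, "a", encoding="utf-8") as dst:
        for line in src:
            digest = hashlib.sha1(line.encode("utf-8")).digest()
            if digest not in seen:
                seen.add(digest)
                dst.write(line)

if __name__ == "__main__":
    while True:
        copy_new_lines()
        time.sleep(5)  # poll well below the time the app needs to fill 50MB
```

The UF then monitors the rewritten file with an ordinary [monitor://...] stanza, and that file can be rotated or truncated on a schedule so it doesn't grow forever.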
Many thanks for the insights on UF behaviour for this weird log.
Is it possible for you to configure the app to use standard log rotation (e.g., rename and create a new file when full, or truncate/append)?
If you keep the current overwrite behaviour, Splunk may miss or duplicate events, and reliable ingestion cannot be guaranteed.
Regards,
Prewin
Hi @Anders333
I think the main issue here is that it starts overwriting events from the top of the file. I believe this is a pretty unusual approach, as you end up with events in a strange order within the file, e.g.:
17/Jun/2025 09:08 - Event 5
17/Jun/2025 09:10 - Event 6
17/Jun/2025 09:01 - Event 1
17/Jun/2025 09:03 - Event 2
17/Jun/2025 09:05 - Event 3
17/Jun/2025 09:06 - Event 4
The issue here is that even if you can convince Splunk to start reading the events again from the top of the file, it may end up re-ingesting events 1-4.
Is there any way you can reconfigure the output of your app to log differently? e.g. rotate into a new log file?
Unfortunately, I cannot change how the log is written to the extent required.
I have increased the max file size and reduced the number of events generated, so the log should not wrap around within its lifespan before a reset, avoiding the scenario you described.
Unfortunately, this did not get the UF to monitor the file.
Thanks for the heads-up about that potential issue.
Hi @Anders333 ,
what kind of failure are you seeing?
Your situation has an inherent issue: Splunk checks the log every few seconds, but if the application overwrites the file before it has been read, you lose the last logs.
Then, if the content appears unchanged (Splunk checksums the first 256 characters by default), it doesn't read the file twice.
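Roughly, the idea (a conceptual sketch, not Splunk's actual code) is:

```python
import zlib

def file_fingerprint(path, init_crc_length=256):
    # The UF fingerprints a monitored file by checksumming its first
    # initCrcLength bytes (256 by default). If the fingerprint matches
    # one it has already seen, it treats the file as known and resumes
    # from its saved offset instead of re-reading from the start.
    with open(path, "rb") as f:
        return zlib.crc32(f.read(init_crc_length))
```

In your case the first 256 bytes do change once the overwrite wraps back to the top, but the saved read offset then no longer matches the content, so you risk duplicates or gaps either way.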
Ciao.
Giuseppe
The inherent issue of overwriting and potentially losing logs is not a problem for my application, but thanks for the heads-up.
Are you saying that Splunk is not able to detect that the application has started writing at the beginning of the file again, because it keeps checking from its saved offset at the end of the file?
Hi @Anders333 ,
No, I said that if the same log file is rewritten with the same content, Splunk doesn't read it again.
But, to better understand your issue: what is the behaviour of your ingestion?
Ciao.
Giuseppe
You are likely correct that the UF does not read the log file twice: it reads it once, as an initial batch ingestion, and then never again while monitoring.
The log file never updates its modification time or size, as it is never closed by the application. I believe the CRC would change once the oldest events are overwritten, since that happens at the top of the file. But as you and others have pointed out, that is not desirable behaviour.
So, assuming none of the checks for change in the log file work: do you have any ideas on how I can make the UF open and read the file, or what mechanism prevents it?
As stated in the initial post, I have already tried a few things; are there any more tricks?