We recently had an instance where log files were being filled with null characters and growing exponentially, and it ultimately took a Splunk restart to release them.
The UF was obviously not forwarding them, since the files had effectively become binary, but the logs show it kept trying. An SA removed the file, but the UF would not let it go.
Is this expected behavior? I haven't run into a case before where log rotation or removal left the process holding the file open.
splunkd 13880 splunkuser 38r REG 9,2 843716702208 51601434 /home/blah/logs/blah/sfdc/blah.blah-6-3-blah.app1.20150604.NNN.gmt.log (deleted)
The splunkd logs only say:
TailingProcessor - File will not be read, is too small to match seekptr checksum ...
So, if the file won't be read, any ideas as to why the UF wouldn't release it?
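For anyone else who hits this: on Linux you can at least confirm which deleted files splunkd is still holding open and, as a workaround rather than a fix, reclaim the disk space without restarting by truncating the file through the process's fd table. This is a generic Linux technique, not anything Splunk-specific; the PID (13880) and fd number (38) below are taken from the lsof output above and would need to be replaced with your own values.

    # list files splunkd still holds open after deletion (link count 0)
    lsof +L1 -c splunkd

    # reclaim the space without a restart by truncating through the fd entry
    # (run as root; 13880 is the splunkd PID and 38 is the fd from the lsof line above)
    : > /proc/13880/fd/38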
We just had something similar happen. The UF had a hold on some archived files, and one of them grew exponentially until it filled the disk. I couldn't delete the file and had to stop the Splunk UF before I could remove it. Does anyone know why, or where I could look to see why Splunk had a hold on the files, so I can make sure this doesn't happen again?
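Not a root-cause answer, but two places worth looking the next time it happens: the forwarder's own view of its monitored files, and splunkd.log. The splunk list inputstatus CLI command reports what the TailingProcessor thinks about each file it is tracking (read position, size, percent read), which at least tells you whether it's the tailing input holding the descriptor. The paths below assume a default UF install location and may differ on your system.

    # ask the UF what it thinks about its monitored files
    /opt/splunkforwarder/bin/splunk list inputstatus

    # check splunkd.log for TailingProcessor / WatchedFile messages about the file
    grep -iE 'TailingProcessor|WatchedFile' /opt/splunkforwarder/var/log/splunk/splunkd.log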