Getting Data In

How to detect a deleted log

pkeller
Contributor

I think some of my forwarders may be experiencing cases where logs are being removed before all events have been forwarded. Is there a string to look for in splunkd.log, or a recommended increase in logging level, that would show when splunkd encounters a file it has been monitoring that no longer exists?

1 Solution

Yasaswy
Contributor

Hi,
This is an interesting question, and it would be very useful to have something in Splunk that could spot this. However, from Splunk's standpoint there is no "end of file": a log keeps being written to until it is rotated or deleted, so a missing file is nothing abnormal. Also, reading a log file should generally be much faster than the rate at which data is written to it. So I don't think there is currently any setting in Splunk that records an incident where a log file was deleted before it was fully read; Splunk would simply treat a missing log file as completely read (and assume it was rotated or deleted by a batch process).
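That said, the forwarder's splunkd.log is forwarded to the _internal index by default, and the file-monitoring components (TailingProcessor and WatchedFile) do log what they are doing with each file. A search along these lines may surface warnings or errors around the files you suspect; this is just a sketch, and <your_forwarder> is a placeholder for the host you want to check:

    index=_internal sourcetype=splunkd host=<your_forwarder>
        (component=TailingProcessor OR component=WatchedFile)
        (log_level=WARN OR log_level=ERROR)
    | table _time host component log_level _raw

If nothing shows up at WARN/ERROR, dropping the log_level filter will at least show the normal open/read activity for those paths.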

The information on which data inputs are being monitored is available from the REST API (URI Reference), but I don't think it will be of much help in this case. What makes you think these log files are not being fully read? Are they big files that are moved to a specific input location and then deleted by a batch process?
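For reference, the monitoring status those REST calls expose can also be pulled into a search with the rest command. The sketch below assumes it is run on the instance that actually runs the monitor input (splunk_server=local); a universal forwarder has no search interface, so there you would query its management port (https://<forwarder>:8089) directly, for example with curl or splunk _internal call against the same endpoint. The transpose is only there to flatten the single result row into readable per-file rows:

    | rest /services/admin/inputstatus/TailingProcessor:FileStatus splunk_server=local
    | transpose

The output should include a per-file position/size/percent-read style breakdown, which is probably the closest you can get to confirming whether a file was fully read before it disappeared.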

