Universal Forwarder to remove files

SplunkDash
Motivator

Hello,

Is it possible for the UF to remove/delete files once their data has been pushed to the indexer? How would I do that? Thank you... any help will be highly appreciated.

wbfoxii
Communicator

I've been using batch mode to clean up behind the UF for a long time. Our DBAs create XML audit files for our databases; I read them with Splunk and then delete them (as long as the account running Splunk has the proper permissions). Example inputs.conf below:

[batch:///var/oracle/*.xml]
disabled = 0
# sinkhole makes this a destructive read: Splunk deletes each file once it has been indexed
move_policy = sinkhole
sourcetype = mfg:oracle:xml
index = <my-index>

isoutamo
SplunkTrust

Hi

As far as I know, the Splunk UF is just for collecting data, not for removing already-read events/logs.

Knowing when you can safely remove those log files is not a simple task if you want to ensure that there is no situation where you lose events.

Quite a lot of software/applications already have some mechanism for handling unneeded log files. You could probably use that, with a long enough grace period after the logs have been read. Another option is to use Splunk SOAR (or another similar tool), where you can define a workflow that removes/archives those log files on the remote end.

r. Ismo

SplunkDash
Motivator

Yes, I agree, but if you follow this link... there is an option called "batch". I am not sure how it works... but any help will be appreciated. Thank you.

inputs.conf - Splunk Documentation

isoutamo
SplunkTrust

Batch is for cases where you want to ingest some "special" files manually as a one-time operation and delete them afterwards. When you want to monitor a file continuously, you must use a monitor stanza.
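
A minimal sketch of the difference in inputs.conf (the paths and sourcetypes below are made-up examples):

# batch: one-shot ingest; with sinkhole, Splunk deletes each file after indexing it
[batch:///opt/app/export/*.csv]
move_policy = sinkhole
sourcetype = app:export

# monitor: continuous tailing; Splunk keeps reading new events and never deletes the file
[monitor:///var/log/app/app.log]
sourcetype = app:log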

SplunkDash
Motivator

So then... typically the Splunk UF doesn't have an option to delete files that have been ingested, as you said... correct? Then how would I delete/remove those used files/logs, since the volume of used logs will grow very high over time... and I need to automate that deletion process. Is it possible? Thank you!


teunlaan
Contributor

You can use the batch/sinkhole config to delete the file after (successfully) sending the data (add ACK, as sketched below, to be sure the data is indexed).

BUT... you'll have to be 100% sure that the application writing to the file you delete doesn't crash when the file is suddenly "gone". Some applications don't create a new log file if the original is removed while the application is still running.

Usually the "data provider" needs to clean up its own files.
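
A minimal sketch of enabling indexer acknowledgment on the forwarder (the output group name and server addresses are placeholders):

# outputs.conf on the UF
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# wait for the indexer to acknowledge the data before treating it as delivered;
# with batch/sinkhole this reduces the risk of deleting a file that was never indexed
useACK = true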


PickleRick
SplunkTrust

@teunlaan It's not that they don't create a new file. It's just that on unices it typically works like this:

1) Process A opens/creates a file. It gets a file descriptor for this file.

2) Process A reads/writes to this file.

3) Meanwhile, Process B "deletes" the file. This only unlinks the directory entry for the file. As long as at least one process is still using the file (has an open descriptor for it), the file itself remains on disk, and the system still allows the process(es) holding open descriptors to read and write its contents. It's just not shown in the directory listing and cannot be opened by new processes.

4) After the last process holding a descriptor for the file closes it, the file is effectively deleted from the filesystem by the kernel.

So the original process holding the file open doesn't even know that the file was "deleted" by another process. There may be additional complications when file locking is used, but this is the default behaviour.

On Windows, the file-sharing semantics work a bit differently, as far as I remember, which may introduce other problems.
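
A small Python sketch of this behaviour on a Unix-like system (the filename is arbitrary):

import os

# "Process A": create a file and hold it open.
f = open("demo.log", "w")
f.write("first line\n")
f.flush()

# "Process B": unlink the directory entry while the descriptor is still open.
os.unlink("demo.log")  # the name disappears from the directory listing...

# ...but the open descriptor still works; writes land in the orphaned inode.
f.write("still writable after unlink\n")
f.flush()
print(os.fstat(f.fileno()).st_nlink)  # prints 0: no directory entries left

# Only now does the kernel free the file's blocks.
f.close()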


PickleRick
SplunkTrust

No. It's up to the log provider or some external solution to clean up the log files. Splunk can monitor multiple files and can keep track of which files it has already seen by fingerprinting them, so it can detect when a file has been rotated or recreated under an already-used name. But it does not remove the files.
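
For reference, that fingerprint is a CRC taken over the beginning of the file, and it can be tuned in inputs.conf; a sketch (the monitored path is a made-up example):

[monitor:///var/log/app/*.log]
# by default the CRC covers the first 256 bytes; lengthen it when many files share an identical header
initCrcLength = 1024
# mix the full path into the CRC so identical files at different paths are tracked separately
crcSalt = <SOURCE>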
