We ran out of disk space on the Splunk indexer, so I cleaned up the data using the methods below.
First I ran `source="E:\Application1\logs\*" | delete`, which removed the logs from the Splunk Web console, but the disk space was not freed. I then deleted all the indexed data with the CLI command `./splunk clean eventdata`, which did reclaim the disk space.
After restarting the server I can't see my old logs in Splunk Web, but the /opt mount point keeps growing. I don't know what is being indexed now, since nothing shows up in Splunk Web, although the inputs.conf file on my universal forwarder still has the E:\Application1\logs\ entry.
How can I find out what is currently being indexed?
If I need to permanently delete something that I no longer need to monitor, where should I make the change? (Deleting the entry in inputs.conf is not working.)
How can I view the old log files in Splunk Web again?
- Changing inputs.conf will stop new data from being collected.
- `| delete` in the UI makes data non-searchable (but does not free disk space).
- `splunk clean eventdata` removes all indexed data (and frees the files/space on disk).
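To answer your first question, one common way to see what is currently being indexed is to search Splunk's own metrics log. This is a sketch using the standard `_internal` index and `per_source_thruput` metrics group; adjust the time range as needed:

```
index=_internal source=*metrics.log* group=per_source_thruput
| top limit=20 series
```

The `series` field shows which sources are sending the most data, which should reveal what is filling up /opt.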
Sounds like what you may need is the docs topic "Set a retirement and archiving policy", i.e. configuring Splunk to remove older data automatically. The forwarders (if you're using separate forwarders) keep track of their position in the files they've read. If it's just a couple of files, you could do something to change the checksums of those files, such as adding an empty line to the beginning of each(?). Alternatively, on a forwarder you could run `splunk clean all`, but don't do this unless you understand all the changes it will make.
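For the retirement policy, the relevant settings live in indexes.conf on the indexer. A sketch with example values (the stanza name and numbers here are illustrative; apply them to whichever index holds your data):

```
# indexes.conf on the indexer -- example retention settings
[main]
# roll buckets to frozen (deleted by default) after 90 days
frozenTimePeriodInSecs = 7776000
# cap the total index size at roughly 50 GB
maxTotalDataSizeMB = 51200
```

Whichever limit is hit first triggers the oldest buckets to be frozen, which by default means deleted, freeing disk space. Restart the indexer after editing the file.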