We're running out of disk space.
How do I delete old log data past a certain time on an index?
If I set a max index size, what happens when that limit is reached for an index?
How should I rotate logs so old logs are automatically deleted?
You can delete data directly from the file system, or use the clean CLI command:
$SPLUNK_HOME/bin/splunk clean eventdata -index <index_name>
Note that the delete search command will not remove data from the file system; it only hides the events in Splunk Web.
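A rough sketch of the clean workflow (splunkd must be stopped before running `splunk clean`, and be aware that `clean eventdata` removes all events from the index, not just old ones; `<index_name>` is a placeholder for your index):

```shell
# Stop Splunk first -- clean refuses to run while splunkd is up.
$SPLUNK_HOME/bin/splunk stop

# Remove ALL event data from the named index (irreversible!).
$SPLUNK_HOME/bin/splunk clean eventdata -index <index_name>

# Bring Splunk back up.
$SPLUNK_HOME/bin/splunk start
```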
If you set a max index size, the oldest data past that limit will either be deleted, or archived if you specified a frozen path when creating your index.
Splunk buckets roll from hot --> warm --> cold --> frozen. I believe by default they roll to frozen after about 6 years, or once the index reaches its max size, whichever comes first.
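Both limits live in indexes.conf. A minimal example, assuming an index named `my_index` and illustrative values (the stanza name, size, and archive path are placeholders; 188697600 seconds is the ~6 year default):

```ini
# indexes.conf -- example retention settings
[my_index]
# Roll the oldest buckets to frozen once the index exceeds ~200 GB
maxTotalDataSizeMB = 200000
# ...or once events are older than ~6 years (the default, in seconds)
frozenTimePeriodInSecs = 188697600
# Archive frozen buckets here instead of deleting them (optional)
coldToFrozenDir = /opt/splunk/frozen/my_index
```

If coldToFrozenDir is not set, frozen buckets are deleted rather than archived.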
/opt/splunk/var/lib/splunk folder size is 200G of data.
I'm assuming I manage this folder size via the Index size limit?
/var/log/splunk folder size is 90G of data
How should I manage this folder size? Is it safe to delete these *.log files in this folder?
Yes, correct. By default the max index size is 500GB. Go to Settings > Indexes, find your index, and modify the size limits.
Is /var/log/splunk on a separate server whose logs are being forwarded to Splunk? If so, then yes, you can delete those *.log files, since the data has already been ingested by Splunk (check before removing! A better strategy would be to zip them or move them to another drive if they are important). As for log rotation, that's more of a sysadmin task than a Splunk task. You will either need to grow the drive or roll your logs on a regular basis.
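One way to do the "zip instead of delete" approach from a cron job (the directory and the 30-day cutoff are examples, not anything Splunk-specific; adjust for your environment):

```shell
# Compress already-ingested *.log files older than 30 days instead of deleting them.
# LOGDIR defaults to the folder in question but can be overridden.
LOGDIR=${LOGDIR:-/var/log/splunk}
find "$LOGDIR" -name '*.log' -mtime +30 -exec gzip {} \;
```

For anything more involved (rotate, compress, prune on a schedule), logrotate is the usual sysadmin tool for this.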