But the index info for linux_log in the monitoring console looks like this:
It's obvious that some of the data is older than the retention setting, and I can still search the data that should have aged out:
The timestamp of the first event is the time when I started this server. In other words, no data has been deleted from Splunk even though it is older than the retention limit.
In fact, we configured the retention limit to 1 year in my customer's Splunk, but there are 6 years' worth of data, and their disk will be full within the next 3 days.
I have 3 questions:
1. Why is this happening?
2. How can I delete the old data? I know I can identify the timestamp and then delete the bucket folder under the db path, but the timestamp in the bucket name is the index time, not the event timestamp, isn't it?
3. I can't use `delete` in a search, because that command only removes events from search results and doesn't free up disk space, right?
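Regarding question 2, rather than guessing from bucket folder names, the `dbinspect` command can report each bucket's actual event-time span. A sketch (the index name matches the post; `startEpoch`/`endEpoch`/`state`/`path` are standard `dbinspect` output fields):

```
| dbinspect index=linux_log
| eval span_days = round((endEpoch - startEpoch) / 86400, 1)
| table bucketId state startEpoch endEpoch span_days path
| convert ctime(startEpoch) ctime(endEpoch)
```

Buckets whose `endEpoch` (newest event) is still recent will not be frozen, no matter how old their `startEpoch` is.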
Buckets contain a span of data. For a bucket to be frozen (deleted, by default), the newest event in the bucket must be older than frozenTimePeriodInSecs. For this reason, people often constrain buckets to be at most 1 day wide, but doing so can cause very serious side effects, including unfixable scalability problems.
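For reference, retention is configured per index in indexes.conf. A minimal sketch, assuming the index is named `linux_log` and the paths follow the defaults:

```ini
# indexes.conf -- hypothetical stanza for the linux_log index
[linux_log]
homePath   = $SPLUNK_DB/linux_log/db
coldPath   = $SPLUNK_DB/linux_log/colddb
thawedPath = $SPLUNK_DB/linux_log/thaweddb

# A bucket is frozen only when its NEWEST event is older than this.
# 1 year = 31536000 seconds.
frozenTimePeriodInSecs = 31536000

# Optionally cap how wide a hot bucket's time span can grow, so a few
# recent events can't keep years-old data alive in the same bucket.
# (Use with care: overly narrow buckets create too many buckets.)
maxHotSpanSecs = 7776000
```

Note that the shipped default for frozenTimePeriodInSecs is 188697600 seconds, roughly 6 years, so seeing about 6 years of data is consistent with the 1-year setting never having been applied to this particular index (for example, if it was set in the wrong stanza or on the wrong instance).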