Hi, Splunkers:
I have a question about retention policy. I configured frozenTimePeriodInSecs for my index linux_log in the [default] stanza of indexes.conf, and the result from splunk btool is as follows:
# splunk btool --debug indexes list linux_log | grep frozenTimePeriodInSecs
/opt/splunk/etc/system/local/indexes.conf frozenTimePeriodInSecs = 31624400
But the index info for linux_log in the Monitoring Console looks like this:
It's obvious that some of the data is older than the retention setting! And I can still search that expired data:
The timestamp of the first event is the time when I started this server. In other words, no data has been deleted from Splunk even though it is older than the retention limit.
In fact, we configured the retention limit to 1 year in my customer's Splunk environment, but there are 6 years of data there, and their disk will be completely full within the next 3 days.
There are 3 questions:
1. Why is this?
2. How can I delete the old data? I know I could identify the timestamps and then delete the bucket folders under the db path, but the timestamps in the bucket names are index time, not event time, aren't they?
3. I can't use the delete command in a search, because it only removes events from search results and does not clean up the disk, right?
Buckets contain a span of data. In order for a bucket to be frozen (deleted), the newest event in the bucket must be older than frozenTimePeriodInSecs. For this reason, people often constrain buckets to be at most 1-day wide, but this can cause very serious side-effects, including scalability problems that are unfixable.
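To see why particular buckets are not being frozen, you can check the event-time span of each bucket with dbinspect. A minimal sketch of such a search (startEpoch/endEpoch are the earliest/latest event times dbinspect reports per bucket; 31624400 is the frozenTimePeriodInSecs from your btool output):

| dbinspect index=linux_log
| eval newest_event_age = now() - endEpoch
| eval frozen_eligible = if(newest_event_age > 31624400, "yes", "no")
| convert ctime(startEpoch) ctime(endEpoch)
| table bucketId state startEpoch endEpoch frozen_eligible path

Any bucket whose endEpoch (newest event) is still within the last year cannot be frozen yet, even if its startEpoch is several years old.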
OK, thanks for your reply. I'll delete some buckets, even though we will lose some data.