Our indexes seem to be taking up too much disk space, so rather than just moving them I'd like to look at the best way to change our approach.
According to this page, http://www.splunk.com/base/Documentation/latest/admin/SetARetirementAndArchivingPolicy data will be automatically frozen after approx 6 years:
To remove data beyond a specified age, set frozenTimePeriodInSecs in indexes.conf to the number of seconds to elapse before the data gets erased. The default value is 188697600 seconds, or approximately 6 years.
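For reference, a minimal sketch of how that setting might look in indexes.conf (the `[os]` stanza name matches the index discussed below; the value shown is the documented default):

```
[os]
# A bucket is frozen (deleted, unless an archive script or coldToFrozenDir
# is configured) only once its NEWEST event is older than this many seconds.
# 188697600 s = 2184 days, roughly 6 years.
frozenTimePeriodInSecs = 188697600
```

Note that changes to indexes.conf typically require a Splunk restart to take effect.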
However, if I go into the Manager and look at my indexer, the os index shows an Earliest Event of 8 Aug 2003 12:29:17, which is more than 6 years ago. How can I check whether this property is working? It doesn't seem to be.
Splunk deletes based on the latest event contained in a bucket. For this reason, you may not see exact timing for deletion of data if a bucket spans a wide time range. To inspect the span of a db/bucket, you can use the dbinspect command to see the time range of each bucket.
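For example, a quick way to list the buckets for the index in question (assuming the index is named `os`, as above):

```
| dbinspect index=os
```

Each row of the output describes one bucket, including its earliest and latest event times, so you can see which buckets straddle the deletion cutoff.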
It's possible that you have a bucket containing events on both sides of the deletion cutoff. Since deletion is based on the latest (most recent) event in a bucket, you may still see older events in an index.
Thanks, I got it. I can see buckets whose earliest time is 2003 but whose latest time is 2010. So is there no easy way to clear out my old data or make this setting work?
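One way to spot the problem buckets is to compute each bucket's time span from the dbinspect output. A sketch (the startEpoch/endEpoch field names are what recent dbinspect versions emit; check the field names in your version's output):

```
| dbinspect index=os
| eval spanDays = (endEpoch - startEpoch) / 86400
| where spanDays > 365
| table bucketId startEpoch endEpoch spanDays
```

Buckets with very large spans are the ones whose 2003 events won't freeze until their 2010 events age past the cutoff.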
Splunk won't age out a bucket to frozen until all events in that bucket are older than frozenTimePeriodInSecs.
In the search app, take a look at Status->Index Activity->Index Health.
Also take a look at the | dbinspect search command.