This I will have to play with, but auto-generating some number of buckets (manually triggering bucket generation per day with a script) and then having them move to cold once that count is reached sounds like the solution.
Splunk index bucket management is usually not something you want to poke at manually. The hot-to-warm case is somewhat interesting because it comes up for backup purposes, and it can be useful to force bucket sizing in time or in space on your own terms instead of relying on Splunk's pre-packaged logic. The warm-to-cold case is only interesting when dealing with multiple datastores (multiple filesystems).
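To illustrate the hot-to-warm point: the roll from hot to warm can be nudged through per-index settings in indexes.conf such as maxDataSize (the size, in MB, at which a hot bucket rolls to warm) and maxHotSpanSecs (an upper bound on the time span a hot bucket may cover). The following is only a sketch; the index name and the values are illustrative, not recommendations.

[main]
# Roll a hot bucket to warm once it reaches roughly 750 MB in size...
maxDataSize = 750
# ...or once its events span more than a day, whichever limit is hit first.
maxHotSpanSecs = 86400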
However, the warm-to-cold transition does become a point of interest when first setting Splunk up, in order to validate behavior and operation. There's no easy way to force it directly, so the general method is simply to constrict the allowed number of warm buckets until some are pushed to cold. In indexes.conf (generally set up in etc/system/local/indexes.conf) you can set maxWarmDBCount on an index-by-index basis.
maxWarmDBCount = <integer>
* The maximum number of warm DB_N_N_N directories.
* All warm DBs are in the <homePath> for the index.
* Warm DBs are kept in open state.
* Defaults to 300.
This means you can temporarily configure your main index (say, in the initial setup case), or you can set up a test index to try things with.
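As a minimal sketch of the test-index approach, assuming a throwaway index named 'buckettest' (the name, paths, and the count of 3 are arbitrary choices for the experiment), an etc/system/local/indexes.conf stanza might look like:

[buckettest]
homePath   = $SPLUNK_DB/buckettest/db
coldPath   = $SPLUNK_DB/buckettest/colddb
thawedPath = $SPLUNK_DB/buckettest/thaweddb
# Keep at most 3 warm buckets; the oldest warm bucket rolls to cold
# as soon as a fourth one appears.
maxWarmDBCount = 3

Remember that Splunk generally needs a restart before it picks up changes to indexes.conf, particularly when a new index is being added.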