One of my indexes has a couple of old warm buckets that were closed for writing back in 2014; the next-oldest bucket is from 2017. When I use dbinspect to determine data age per index, these outliers throw off the accuracy of my report. Each one is less than 0.5 MB on disk.
How can I get rid of them? Can I manually roll them to cold, then frozen?
Splunk index bucket management is usually not something you want to poke at manually. The hot-to-warm transition is somewhat interesting because it comes up for backup purposes, and forcing it lets you control bucket sizing by time or by space on your own terms rather than relying on Splunk's built-in logic. The warm-to-cold transition is only interesting when you are dealing with multiple datastores (multiple filesystems).
However, this does become a point of interest when first setting Splunk up, in order to validate behavior and operation. There's no easy way to force the warm-to-cold roll directly, so the general method is simply to constrict the allowed number of warm buckets so that some are forced to cold. In indexes.conf (generally set up in etc/system/local/indexes.conf) you can set maxWarmDBCount on an index-by-index basis.
maxWarmDBCount = <integer>
* The maximum number of warm DB_N_N_N directories.
* All warm DBs are in the <homePath> for the index.
* Warm DBs are kept in open state.
* Defaults to 300.
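As a sketch of the test-index approach, a throwaway stanza with a tight warm-bucket cap might look like the following. The index name and paths are illustrative, not from the original post:

```ini
# Hypothetical stanza in etc/system/local/indexes.conf.
[test_roll]
homePath   = $SPLUNK_DB/test_roll/db
coldPath   = $SPLUNK_DB/test_roll/colddb
thawedPath = $SPLUNK_DB/test_roll/thaweddb
# Keep only a few warm buckets so the oldest roll to cold quickly.
maxWarmDBCount = 2
```

After restarting, each hot-to-warm roll beyond the cap should push the oldest warm bucket to coldPath, which makes the transition easy to observe.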
This means you can temporarily configure your main index (say in the initial setup case) or you could configure a test index to try things with.
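As a side note, you can sanity-check bucket ages without dbinspect: warm and cold bucket directory names encode the newest and oldest event times as epoch seconds. A minimal sketch (the bucket name below is made up, and GNU date is assumed):

```shell
# Bucket directories are named db_<newestTime>_<oldestTime>_<id>,
# with both times as epoch seconds. Decode a hypothetical example:
b="db_1388534400_1356998400_42"
newest=$(echo "$b" | cut -d_ -f2)
oldest=$(echo "$b" | cut -d_ -f3)
echo "$b spans $(date -u -d @"$oldest" +%Y-%m-%d) .. $(date -u -d @"$newest" +%Y-%m-%d)"
```

Running this over the directories under homePath and coldPath gives a quick view of which buckets cover which time ranges.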
I did see that post in my research, but I'm not entirely sure it would make a difference here. The space limit for warm has already been reached and cold is also full, so buckets from only 4 months ago are being rolled from cold to frozen; fiddling with the limits would probably just cause newer buckets to be rotated. If Splunk was going to move these buckets based on age, wouldn't it have done so already?
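For context, cold-to-frozen rolling fires on whichever of two independent limits trips first, both set per index in indexes.conf. The values below are the documented defaults, shown only to illustrate the two knobs:

```ini
# Two independent triggers for cold -> frozen:
[main]
frozenTimePeriodInSecs = 188697600   # age-based: ~6 years
maxTotalDataSizeMB     = 500000      # size-based: cap on the whole index
```

If the index is at its size cap, the size trigger will keep freezing buckets regardless of age, which matches the behavior described above.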