Index retains old warm buckets

JeremyHagan
Communicator

One of my indexes has a couple of old warm buckets that were closed for writing in 2014; the next oldest bucket is from 2017. When I use dbinspect to determine data age per index, they throw off the accuracy of my report. Each one is less than 0.5 MB on disk.
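For context, a dbinspect search along these lines is roughly how the bucket ages show up (my_index is just a placeholder, and the fields listed are the ones dbinspect normally returns):

 | dbinspect index=my_index
 | search state=warm
 | eval endDate=strftime(endEpoch, "%Y-%m-%d")
 | table bucketId state startEpoch endEpoch endDate modTime sizeOnDiskMB
 | sort endEpoch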

How can I get rid of them? Can I manually roll them to cold, then frozen?

inventsekar
SplunkTrust

From a similar post:
https://answers.splunk.com/answers/664/how-can-i-trigger-migration-of-buckets-from-warm-to-cold.html

Splunk index bucket management is usually not something you want to poke at manually. The hot-to-warm case is somewhat interesting because it comes up for backup purposes, and forcing it can be useful when you want to control bucket sizing in time or in space on your own terms instead of relying on Splunk's built-in logic. The warm-to-cold case is only interesting when you are dealing with multiple datastores (multiple filesystems).

However, this does become a point of interest when first setting Splunk up, in order to validate behavior and operation. There's no easy way to force it, so the general method is simply to constrict the allowed number of warm buckets and force some to roll to cold. In indexes.conf (generally set up in etc/system/local/indexes.conf) you can set maxWarmDBCount on an index-by-index basis.

maxWarmDBCount = <integer>
 The maximum number of warm DB_N_N_N directories.
 All warm DBs are in the <homePath> for the index. 
 Warm DBs are kept in open state.
 Defaults to 300.

This means you can temporarily configure your main index (say, in the initial setup case), or you can configure a test index to experiment with.
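As a rough sketch, assuming an index named my_test_index already exists (the index name and the count are placeholders, not recommendations), a stanza like this caps the warm bucket count so the oldest warm buckets roll to cold:

 [my_test_index]
 maxWarmDBCount = 10

Splunk normally needs an indexer restart to pick up indexes.conf changes like this, after which the excess warm buckets should roll to cold.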

thanks and best regards,
Sekar

PS - If this or any post helped you in any way, please consider upvoting. Thanks for reading!

JeremyHagan
Communicator

I did see this post in my research, but I'm not sure it would make a difference. The space limit for warm has already been reached and cold is also full, so buckets from only four months ago are being rolled from cold to frozen; fiddling with the limits will probably only cause newer buckets to be rotated. If it was going to move these based on age, wouldn't it have done so already?
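To double-check which limits are actually in play, a search along these lines lists the retention-related settings for the index (my_index is a placeholder, and the field list assumes the standard settings exposed by the data/indexes REST endpoint):

 | rest /services/data/indexes splunk_server=local
 | search title=my_index
 | table title maxWarmDBCount homePath.maxDataSizeMB coldPath.maxDataSizeMB maxTotalDataSizeMB frozenTimePeriodInSecs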


JeremyHagan
Communicator

Would it be based on endEpoch rather than modTime? I notice that these two buckets have higher endEpoch values than the others.
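For what it's worth, a quick side-by-side along these lines shows the two fields next to each other, so buckets with unusually high endEpoch values sort to the top (again, my_index is a placeholder):

 | dbinspect index=my_index
 | search state=warm
 | table bucketId endEpoch modTime
 | sort - endEpoch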
