Hi

I think this is due to the fact that Splunk manages retention at the level of whole buckets: it can only remove/delete a bucket once all the data inside it is older than your retention time. That usually leads to a situation where you have some searchable events that are much older than what you have configured on your indexes.

Also, each indexer usually keeps 3 open hot buckets, and a hot bucket stays open for some default time span (90 days) before it rolls to warm (or until it is rolled manually, e.g. via REST or a splunkd restart). As all Splunk Cloud stacks have at least 3 indexers (usually more), this leads to quite many open hot buckets that contain data older than X days.

Here is the Splunk ingest flow https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590781/highlight/true#M103485 where you can see how data moves between buckets. And here is an old .conf presentation https://conf.splunk.com/files/2017/slides/splunk-data-life-cycle-determining-when-and-where-to-roll-data.pdf which covers this in more detail. It's a little bit old, but mostly still valid. In Splunk Cloud all warm and cold data is in SmartStore, so some of the details differ, but I think you can get the idea from that presentation?

r. Ismo
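For reference, the retention and hot-bucket behavior described above is driven by a few indexes.conf settings. The setting names below are real, but the stanza name and values are just illustrative; note that in Splunk Cloud you normally cannot edit indexes.conf directly and instead set retention through the admin UI:

```
# indexes.conf (illustrative values, not your stack's defaults)
[my_index]
# Retention: a bucket can be frozen (deleted by default) only when ALL
# events in it are older than this many seconds (90 days here). A bucket
# with even one newer event stays searchable, which is why you can see
# events older than your configured retention.
frozenTimePeriodInSecs = 7776000
# Maximum number of concurrently open hot buckets.
maxHotBuckets = 3
# Maximum time span of events in one hot bucket before it rolls to warm
# (default 90 days), which is why old events can linger in open hot buckets.
maxHotSpanSecs = 7776000
```

A hot bucket can also be rolled to warm manually with a POST to the REST endpoint `/services/data/indexes/<index_name>/roll-hot-buckets`, which is what "manually with REST" refers to above.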