Quick question: with maxDataSize on a 64-bit system set to auto_high_volume (10 GB) and the default number of warm buckets being 300, presumably data starts getting rolled to cold at around 3 TB. Is maxTotalDataSizeMB still set to roughly 0.5 TB by default? If so, data would normally never get to cold? Thanks
It is possible that data will never transition to the cold database. With the default bucket settings on the main index, a 3 TB system may never roll buckets to cold: the default maxTotalDataSizeMB of 500000 (roughly 0.5 TB) freezes the oldest buckets long before 300 warm buckets of ~10 GB each could accumulate. Separately, as you approach the disk's capacity, minFreeSpace (set in server.conf, default 5000 MB) dictates when Splunk stops indexing.
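For reference, the settings behind that arithmetic look roughly like this. This is an illustrative sketch of the relevant defaults for a 64-bit system, not a recommended configuration:

```ini
# indexes.conf -- defaults relevant to the question (64-bit system)
[main]
maxDataSize = auto_high_volume   # ~10 GB per bucket on 64-bit
maxWarmDBCount = 300             # warm buckets kept before rolling to cold
maxTotalDataSizeMB = 500000      # ~0.5 TB total index size cap

# 300 warm buckets x ~10 GB = ~3 TB, but the 500000 MB cap freezes the
# oldest buckets long before that, so cold is never reached with defaults.
```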
If you have a system that has 500 GB available on the root partition and a large amount (say, 3 TB) on an NFS mount, we recommend you place the cold database on the NFS mount and tune the hot/warm buckets to fit the root partition. We recommend this because local disk typically has higher read/write performance than NFS. So in this particular scenario, with one main index, you would tune for roughly 5-10 hot buckets and 30+ warm buckets, and let the remainder of the data live in the cold database.
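A minimal sketch of that layout in indexes.conf might look like the following. The paths and exact sizes are hypothetical; adjust them to your actual partitions and retention needs:

```ini
# indexes.conf -- hot/warm on fast local disk, cold on the large NFS mount
[main]
homePath   = /opt/splunk/var/lib/splunk/defaultdb/db   # hot + warm (local disk)
coldPath   = /mnt/nfs/splunk/defaultdb/colddb          # cold (NFS mount)
thawedPath = /mnt/nfs/splunk/defaultdb/thaweddb

maxDataSize = auto_high_volume   # ~10 GB buckets on 64-bit
maxHotBuckets = 10               # 5-10 hot buckets
maxWarmDBCount = 40              # 30+ warm; hot + warm stay under ~500 GB
maxTotalDataSizeMB = 3000000     # ~3 TB total, so older data rolls to cold
```

With ~50 buckets of ~10 GB each in hot/warm, the local partition stays within its 500 GB, and the 3 TB cap means buckets roll to cold on NFS rather than freezing.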