Getting Data In

Size of data on 64-bit systems before being rolled to cold

eqalisken
Explorer

Hi,

Quick question: with maxDataSize on a 64-bit system set to auto_high_volume (10 GB per bucket) and the default warm bucket count of 300, presumably data starts getting rolled to cold at around 3 TB. Is maxTotalDataSizeMB still set to 0.5 TB by default? If so, data would normally never get to cold? Thanks
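
For reference, the settings I'm assuming look roughly like this in indexes.conf (stock defaults apart from maxDataSize, not values from a specific environment):

[main]
maxDataSize = auto_high_volume
maxWarmDBCount = 300
maxTotalDataSizeMB = 500000

So 300 warm buckets at roughly 10 GB each is where the ~3 TB figure comes from, while maxTotalDataSizeMB caps the whole index at about 0.5 TB.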


Simeon
Splunk Employee

It is possible that data will never transition to the cold db. With the default bucket settings on the main/default index, a 3 TB system may never roll anything to cold. By the time you get close to the 3 TB disk limit, minFreeSpace (set in server.conf) dictates whether Splunk stops indexing.
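
A minimal sketch of where that setting lives (the value below is illustrative; check the default for your version):

# server.conf
[diskUsage]
minFreeSpace = 5000

When free space on a partition Splunk writes to drops below this value (in MB), indexing pauses until space is freed.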

If you have a system with 500 GB available on the root partition and a large amount (say 3 TB) on an NFS mount, we recommend putting the cold db on the NFS mount and tuning the warm buckets to fit the root partition, because you typically get higher read/write performance from local disk. In that scenario, with one main index, you would configure roughly 5-10 hot buckets and 30 or more warm buckets, and let the remainder live in the cold db; a configuration sketch follows below.
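
A sketch of how that layout might look in indexes.conf for the main index, assuming the 500 GB root / 3 TB NFS split above (paths and sizes are illustrative, not prescriptive):

# indexes.conf
[main]
# hot/warm stay on the faster local disk
homePath = $SPLUNK_DB/defaultdb/db
# cold rolls to the large NFS mount
coldPath = /mnt/nfs/splunk/defaultdb/colddb
thawedPath = /mnt/nfs/splunk/defaultdb/thaweddb
maxDataSize = auto_high_volume
maxHotBuckets = 10
# ~40 warm buckets at ~10 GB each keeps hot+warm within the 500 GB partition
maxWarmDBCount = 40
# raise the per-index cap (in MB) so data can actually reach cold
maxTotalDataSizeMB = 3500000

The key points are that homePath (hot/warm) sits on local disk sized by bucket counts, coldPath sits on the NFS mount, and maxTotalDataSizeMB is raised above the default 500000 so the index can grow into the cold storage instead of freezing data early.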
