I am testing out a storage solution for frozen data. I don't have a large Splunk rollout while testing, so I want to get data into my frozen bucket as soon as possible. Can I safely reduce the default value (500,000 MB) of maxTotalDataSizeMB to get data from cold into frozen?
I know I have to reduce other attributes (e.g. maxWarmDBCount) too. The Splunk docs list an upper limit for setting this value. I would expect to see more writes to frozen, but I don't understand what the potential consequences are.
The indexes.conf docs also state that the attribute maxDataSize manages the size of individual buckets, while maxTotalDataSizeMB affects when data rolls from cold to frozen. Is there a conflict if maxDataSize and maxTotalDataSizeMB are set to different values? Does one attribute take precedence over the other when it comes to cold bucket size?
An index's data retention (when a cold bucket rolls to a frozen bucket) is decided by two factors:
1) Maximum size of the index: defined by maxTotalDataSizeMB (the total size of all hot/warm/cold buckets). If the total size of the index on disk exceeds the size set in maxTotalDataSizeMB (default 500GB), the oldest cold bucket rolls to frozen.
2) Maximum age of bucket/data: defined by frozenTimePeriodInSecs (measured against the newest data in a bucket). Every bucket stores data for a range of timestamps; if the timestamp of the latest data in a cold bucket is older than the period set by frozenTimePeriodInSecs, that bucket rolls to frozen.
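As a sketch, both factors can be set per index in indexes.conf. The stanza name, paths, and values below are illustrative only (not from the question); one caveat worth knowing from the docs is that frozen buckets are deleted unless you configure an archive destination such as coldToFrozenDir.

```ini
# Hypothetical test index -- name, paths, and sizes are examples only.
[test_frozen]
homePath   = $SPLUNK_DB/test_frozen/db
coldPath   = $SPLUNK_DB/test_frozen/colddb
thawedPath = $SPLUNK_DB/test_frozen/thaweddb

# Freeze by size: roll the oldest cold bucket once the whole index
# exceeds 1 GB on disk (default is 500000 MB).
maxTotalDataSizeMB = 1000

# Freeze by age: roll a cold bucket once its newest event is older
# than 1 hour (default is roughly 6 years).
frozenTimePeriodInSecs = 3600

# Without this, frozen buckets are deleted; with it, Splunk archives
# the raw data to this directory when a bucket freezes.
coldToFrozenDir = /opt/frozen/test_frozen
```

Whichever threshold is reached first triggers the roll, so for a quick test either a small maxTotalDataSizeMB or a short frozenTimePeriodInSecs will do.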
If you want to move your data to frozen sooner, decide which option is better for you, rollover based on total size or on age of data (or use a combination of both).
The property maxDataSize defines the maximum size a hot bucket can reach before it rolls over to a warm bucket, so it is the size of a single bucket, not of the index, and it does not conflict with maxTotalDataSizeMB. (You may want to re-read the indexes.conf documentation for these two properties, maxDataSize and maxTotalDataSizeMB.)
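To make the distinction concrete, here is a minimal hedged sketch (index name and values are hypothetical) showing the two attributes side by side:

```ini
[test_frozen]
# Per-bucket limit: a hot bucket rolls to warm at ~100 MB.
# Valid values are auto, auto_high_volume, or a size in MB.
maxDataSize = 100

# Per-index limit: the oldest cold bucket rolls to frozen once the
# combined size of all hot/warm/cold buckets exceeds 2 GB.
maxTotalDataSizeMB = 2000
```

A smaller maxDataSize does indirectly affect freezing granularity: since whole buckets roll to frozen, smaller buckets mean smaller slices of data freeze at a time.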