Hello,
I have the configuration below for one index.
maxTotalDataSizeMB = 333400
maxDataSize = auto_high_volume
homePath = volume:hotwarm_cold/authentication/db
coldPath = volume:hotwarm_cold/authentication/colddb
thawedPath = /splunk/data2/authentication/thaweddb
coldToFrozenDir = /splunk/data2/authentication/frozendb
tstatsHomePath = volume:hotwarm_cold/authentication/datamodel_summary
homePath.maxDataSizeMB = 116700
coldPath.maxDataSizeMB = 216700
maxWarmDBCount = 4294967295
frozenTimePeriodInSecs = 2592000
repFactor = auto
The current log volume for this index is 3 GB/day. Due to a change in requirements, the log volume will increase to ~15 GB/day and the retention period will change to 60 days. Could you explain how maxTotalDataSizeMB, homePath.maxDataSizeMB, coldPath.maxDataSizeMB, and maxWarmDBCount should be calculated, and how the calculation changes with data volume and retention period?
The short answer is that you cannot accurately predict what maxTotalDataSizeMB should be. The actual on-disk size will vary with event count, event size, how well the data compresses once indexed, and so on.
In the use case you describe, I think it would be best to set maxTotalDataSizeMB to unlimited (or a very high value), set frozenTimePeriodInSecs to 5184000 (60 days × 86,400 seconds/day), and remove both coldPath.maxDataSizeMB and coldToFrozenDir.
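As a minimal sketch, the revised stanza could look like the following. I'm assuming the index name is authentication (inferred from your paths) and using 4294967295, a very large value, as a stand-in for "unlimited"; keep comments on their own lines, since Splunk .conf files do not support inline comments.

[authentication]
# Effectively unlimited; retention (frozenTimePeriodInSecs) governs disk use instead
maxTotalDataSizeMB = 4294967295
maxDataSize = auto_high_volume
homePath = volume:hotwarm_cold/authentication/db
coldPath = volume:hotwarm_cold/authentication/colddb
thawedPath = /splunk/data2/authentication/thaweddb
tstatsHomePath = volume:hotwarm_cold/authentication/datamodel_summary
homePath.maxDataSizeMB = 116700
maxWarmDBCount = 4294967295
# 60 days x 86,400 seconds/day
frozenTimePeriodInSecs = 5184000
repFactor = auto
# coldPath.maxDataSizeMB and coldToFrozenDir are intentionally omitted, so buckets
# older than 60 days are deleted rather than rolled to a frozen archive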
That ensures the index can absorb the new log volume, and events older than 60 days will be deleted rather than archived (with no coldToFrozenDir or coldToFrozenScript set, Splunk deletes buckets when they freeze).
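If you still want a ballpark figure instead of "unlimited", here is a rough estimate under a common rule of thumb. The ~50% figure (compressed rawdata plus index files) is a frequently cited approximation, not a measurement of your data; test against your actual sourcetypes.

raw volume per day    ≈ 15 GB ≈ 15,360 MB
retention             = 60 days
indexed size on disk  ≈ 50% of raw (varies widely by data type)
estimated usage       ≈ 15,360 MB/day × 60 days × 0.5 ≈ 460,800 MB
with ~20% headroom    ≈ 553,000 MB

You could then split that between homePath.maxDataSizeMB and coldPath.maxDataSizeMB in whatever hot/warm-to-cold ratio fits your storage tiers, but with frozenTimePeriodInSecs doing the enforcement, those caps act as a safety net rather than the primary control.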