Is there a way to roll data from warm to cold based on a time parameter? I searched and didn't find one. There is only this post, in which they approximate the timing with the help of maxWarmDBCount or homePath.maxDataSizeMB.
Imagine a company with the following settings:
Hot/Warm is on SSD and Cold is on HDD
# Move hot to warm after 6 hours
maxHotSpanSecs = 21600
# Move warm to cold when the combined size of hot and warm exceeds 3 TB
homePath.maxDataSizeMB = 3000000
# Set maxWarmDBCount to the highest possible value to override the default
maxWarmDBCount = 4294967295
# Delete data after 6 months
frozenTimePeriodInSecs = 16070400
With this setup, logs roll from hot to warm after 6 hours via maxHotSpanSecs, and let's assume that, at the company's daily logging volume, buckets roll from warm to cold after about 30 days via the homePath.maxDataSizeMB setting. The company uses this large SSD tier for hot/warm precisely so it can search at least 30 days of data at SSD speed. With frozenTimePeriodInSecs, all data is deleted after 6 months.
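As a back-of-the-envelope check of these timings, here is a small sketch. The daily indexed volume of ~100,000 MB (~100 GB/day) is an assumption invented here purely so that the homePath.maxDataSizeMB cap works out to roughly 30 days; the other numbers come from the settings above.

```python
# Rough retention math for the settings above.
# ASSUMPTION: daily_ingest_mb is hypothetical; pick your real daily volume.

max_hot_span_secs = 21600      # hot -> warm after 6 hours
home_path_max_mb = 3_000_000   # hot + warm size cap, ~3 TB
frozen_secs = 16_070_400       # delete after ~6 months
daily_ingest_mb = 100_000      # hypothetical ~100 GB/day

hours_hot = max_hot_span_secs / 3600          # time a bucket stays hot
days_in_home = home_path_max_mb / daily_ingest_mb  # days until warm rolls to cold
days_until_frozen = frozen_secs / 86400       # days until data is frozen (deleted)

print(hours_hot)           # 6.0
print(days_in_home)        # 30.0
print(days_until_frozen)   # 186.0
```

Note how days_in_home depends entirely on the (assumed) ingest rate: halve the daily volume and the time spent on SSD doubles, which is exactly the problem described below for the sensitive index.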
That's all fine up to this point.
The company now creates a new index, [SuperSensitiveData]. The logs in this index must be deleted after 7 days to be GDPR compliant. The daily input volume for this index varies a lot. The Splunk admin creates the following stanza in indexes.conf.
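The original stanza was not included in the post; a minimal sketch of what such a stanza would presumably look like, assuming the intent is a 7-day retention (7 × 86400 = 604800 seconds):

```ini
[SuperSensitiveData]
# Delete data after 7 days (7 * 86400 seconds)
frozenTimePeriodInSecs = 604800
```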
Now the data from SuperSensitiveData will sit in hot/warm for 30 days before it is deleted, right? Because the frozenTimePeriodInSecs parameter in the SuperSensitiveData stanza only takes effect for data in cold storage, and the data only gets there after 30 days.
And it can get even worse: if the company's environment changes and the daily data volume is halved, the SuperSensitiveData logs will stick around for 60 days.
Did I miss something here? Is there a way to solve this problem?