I understand that data in Splunk rolls through four bucket stages before it is deleted - Hot, Warm, Cold and Frozen.
The documentation says that to delete data by maximum age we have to specify a value in indexes.conf, and that the default is 250,000 MB. Does that mean each bucket holds data until it reaches 250 GB - say, the Hot bucket holds data until it reaches 250 GB, then transfers it to Warm, then Warm holds it until it reaches 250 GB, and so on?
Or is that 250 GB applicable only to the Frozen bucket? If so, on what condition do the other buckets roll their data out? Can anyone please clarify?
Actually, there are four stages a bucket can be in: hot, warm, cold and frozen. Typically there are multiple buckets in each stage. I couldn't find any reference to a 250 GB default in indexes.conf, but if you can provide a link to the docs you were reading, someone may be able to explain it. Meanwhile, here is some good reading to help calculate how much storage your Splunk indexes will need (or how to match your Splunk config with your storage resources).
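For a quick back-of-envelope estimate before diving into the sizing docs, a rough sketch like the one below can help. The ~50% compression ratio is a common rule of thumb, not a guarantee - actual ratios vary a lot by data type, and the function name here is just illustrative:

```python
def estimate_index_storage_gb(daily_ingest_gb, retention_days, compression_ratio=0.5):
    """Rough on-disk estimate for one index.

    Assumes compressed raw data plus index files come to roughly
    `compression_ratio` of the original ingest volume - a ballpark
    figure only; measure your own data for real capacity planning.
    """
    return daily_ingest_gb * retention_days * compression_ratio

# Example: 10 GB/day retained for 90 days -> roughly 450 GB on disk.
print(estimate_index_storage_gb(10, 90))
```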
Thanks for the reply.
If you look at the above link, it says maxTotalDataSizeMB = 250000 - so my question was: will the indexer hold the data until it reaches the value specified in maxTotalDataSizeMB? (I'm yet to read the link you've given me.)
As it says in the docs, if/when your index reaches maxTotalDataSizeMB (that is, the total size of hot, warm and cold data), Splunk will delete data starting from the oldest. The catch is that your index may never reach this size, as there are other controls that will freeze your data (i.e. delete it) before the total size gets up to maxTotalDataSizeMB.
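To make the interplay concrete, here is a sketch of the relevant indexes.conf settings. The stanza name `my_index` and the specific values are illustrative, not defaults:

```ini
[my_index]
# Freeze (delete or archive) events older than 90 days, even if the
# index is nowhere near its size cap.
frozenTimePeriodInSecs = 7776000

# Cap the combined size of hot + warm + cold data; when the cap is
# hit, the oldest buckets are frozen first.
maxTotalDataSizeMB = 250000

# Optional: copy frozen buckets here instead of deleting them.
# coldToFrozenDir = /archive/my_index
```

Whichever limit is reached first wins: with a low retention period the index may freeze data long before it ever approaches maxTotalDataSizeMB.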
Actually, the default maximum size (maxTotalDataSizeMB) for an index is 500,000 MB (500 GB).
Frozen means deleted or exported, and thus no longer available to Splunk.