I would like to know the best practices for managing an index's size.
I read in this post ( https://www.splunk.com/blog/2011/01/03/managing-index-sizes-in-splunk.html ) that we should control the size using maxWarmDBCount and maxTotalDataSize, which are indexes.conf parameters.
But I know it is also possible to manage this with two other parameters, homePath.maxDataSizeMB and coldPath.maxDataSizeMB, which appears to be easier than the first configuration.
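For reference, here is roughly what the two approaches look like in indexes.conf (the index name and size values below are made up, and the two stanzas are alternatives, not meant to be combined):

    # Option 1: limit bucket count and total index size
    [my_index]
    maxWarmDBCount   = 300       # max number of warm buckets before rolling to cold
    maxTotalDataSize = 500000    # max total index size in MB (hot + warm + cold)

    # Option 2: limit hot/warm and cold storage separately
    [my_index]
    homePath.maxDataSizeMB = 300000   # cap for hot + warm buckets, in MB
    coldPath.maxDataSizeMB = 200000   # cap for cold buckets, in MB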
What is the best way to do it?
I think it mainly depends on your retention requirements: you have to define how long logs must stay searchable (maybe there are compliance requirements!) and then configure your storage based on capacity planning.
If you want, you can use the Monitoring Console as input for the capacity planning.
Retention can be managed with the frozenTimePeriodInSecs option in indexes.conf.
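For example, a minimal retention setting might look like this (the index name and period are illustrative; the period is evaluated against the newest event in each bucket):

    [my_index]
    # freeze buckets once their newest event is older than ~90 days
    frozenTimePeriodInSecs = 7776000    # 90 days * 86400 seconds/day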
Thanks for your response.
I want to keep my data searchable for two months and then roll it to frozen.
My question is about the best way to configure this, since the two configurations presented appear to work the same way, in my opinion.
One of the best things to refer to would be @sloshburch's .conf2017 session on Best Practices for Admins.
Also, go through a couple of the Splunk Wiki topics.
Bingo! I'm a huge fan of defining volumes and setting maxVolumeDataSizeMB for each volume. Then also set frozenTimePeriodInSecs per index. This means that (assuming I use the volumes in my index definitions) I can limit the time period and size per index, and also prevent the overall filesystem from filling up if the data volume of an index changes drastically without my realizing it.
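A minimal sketch of that setup, with hypothetical paths and sizes, and two months taken as 60 days:

    # The volume caps overall disk usage; the index caps retention by time
    [volume:hotwarm]
    path = /opt/splunk/var/lib/splunk
    maxVolumeDataSizeMB = 400000        # oldest buckets get frozen when the volume fills

    [my_index]
    homePath   = volume:hotwarm/my_index/db
    coldPath   = volume:hotwarm/my_index/colddb
    thawedPath = $SPLUNK_DB/my_index/thaweddb   # thawedPath must not reference a volume
    frozenTimePeriodInSecs = 5184000            # 60 days * 86400 seconds/day

With this combination, frozenTimePeriodInSecs enforces the two-month retention per index, while maxVolumeDataSizeMB acts as a safety net for the filesystem as a whole.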