
How should I set an index retention policy for maximum storage usage?

daisy
Explorer

hi all, I am considering updating our index retention policy, but I am not sure how to choose the maximum possible allocated space. We have a few indexes, and one of them takes up about half of the total index volume. We would like to keep the data for as long as possible, but we have limited storage. For simplicity, let's say we have 1 TB of storage, a single instance, and 10 indexes. As far as I understand, the best option is to set maxTotalDataSizeMB to cap the size of each index (I have sketched what I mean below the questions). However, I can't simply divide the 1 TB evenly across the indexes, since only part of the disk can be taken up by indexed data. So my questions are:

1) How should I choose the maxTotalDataSizeMB for each index?

2) How can I make maximum use of the server's storage without running into problems with Splunk?

3) Is it reasonable to estimate the total index storage by measuring how much space is used outside the /opt/splunk/var/lib directory and then allocating the remainder to the indexes?

4) What approach would you recommend in my case? Is it reasonable to keep data for as long as possible, or are there reasons to avoid this approach?
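For reference, here is a minimal sketch of the kind of indexes.conf I have in mind, based on what I have read about volume-based sizing. The index name, paths, and numbers are placeholders for the 1 TB example, not a tested configuration:

# indexes.conf - untested sketch; names and sizes are placeholders
[volume:primary]
path = $SPLUNK_DB
# Cap the combined size of all indexes on this volume at ~850 GB,
# leaving headroom for the OS, Splunk itself, and temporary space.
maxVolumeDataSizeMB = 870400

[big_index]
# Hypothetical index that holds about half of our data.
homePath   = volume:primary/big_index/db
coldPath   = volume:primary/big_index/colddb
thawedPath = $SPLUNK_DB/big_index/thaweddb   # thawedPath cannot reference a volume
# Per-index ceiling of ~400 GB (value is in MB) on top of the volume cap.
maxTotalDataSizeMB = 409600
# Optional time-based retention: freeze buckets older than ~2 years.
frozenTimePeriodInSecs = 63072000

My understanding is that the volume cap limits the combined size of every index stored on it and freezes the oldest buckets first, so I would not have to divide the 1 TB per index up front; maxTotalDataSizeMB would then only act as a ceiling for the largest index. Please correct me if that is wrong.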
