Deployment Architecture

What is Index Retention Policy for maximum storage usage?

daisy

hi all, I am considering updating our index retention policy, but I am not sure how to choose the maximum allocated space. We have a few indexes, and one of them takes up about half of the total index volume. We would like to keep the data for as long as possible, but we have limited storage. For simplicity, say we have 1 TB of storage, a single instance, and 10 indexes. As far as I understand, the best option is to set `maxTotalDataSizeMB` to cap the size of each index. However, I can't simply divide the 1 TB evenly across the indexes, since only part of that space can be taken up by indexed data. So my questions are:

1) How should I choose the `maxTotalDataSizeMB` value for each index?

2) How can I make maximum use of the server storage without running into Splunk problems?

3) Is it reasonable to calculate the total index storage by measuring the storage used outside of the /opt/splunk/var/lib directory and then deciding how much of the remainder can be allocated to indexes?

4) What approach would you recommend in my case? Is it reasonable to keep data for as long as possible and are there reasons for avoiding this approach?
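For anyone with the same question: rather than partitioning the disk exactly per index, Splunk lets you cap total index storage with a volume stanza in indexes.conf and then optionally set per-index `maxTotalDataSizeMB` ceilings on top. Below is a minimal sketch for the 1 TB example, reserving roughly 10% headroom for Splunk's own overhead; the index names, paths, and sizes are illustrative, not a drop-in config:

```ini
# indexes.conf -- illustrative sketch only.
# Cap everything stored under this volume at ~900 GB,
# leaving ~10% of the 1 TB disk as headroom.
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 900000

# Hypothetical large index (~half the data): give it roughly
# half the volume so it cannot starve the others.
[main]
homePath   = volume:primary/main/db
coldPath   = volume:primary/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb
maxTotalDataSizeMB = 450000

# Hypothetical smaller index; the remaining indexes share the rest.
[app_logs]
homePath   = volume:primary/app_logs/db
coldPath   = volume:primary/app_logs/colddb
thawedPath = $SPLUNK_DB/app_logs/thaweddb
maxTotalDataSizeMB = 100000
```

When the volume hits `maxVolumeDataSizeMB`, Splunk freezes the oldest buckets across the volume first, so the global limit is enforced and the oldest data ages out first. Note that `thawedPath` cannot reference a volume, which is why it uses `$SPLUNK_DB` above.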
