Deployment Architecture

How should I set an index retention policy to maximize storage usage?

daisy
Explorer

Hi all, I am considering updating our index retention policy, but I am not sure how to choose the maximum allocated space. We have a few indexes, and one of them takes up about half of the total index volume. We would like to keep the data for as long as possible, but we have limited storage. For simplicity, let's say we have 1 TB of storage, a single instance, and 10 indexes. As far as I understand, it would be best to set MaxTotalDataSizeMB to cap the size of each index (a rough illustration of what I mean is sketched below). However, I can't simply divide the 1 TB across the indexes, since only part of that space can be taken up by indexed data. So my questions are:

1) How should I choose the MaxTotalDataSizeMB value for each index?

2) How can I use as much of the server's storage as possible without running into Splunk problems?

3) Is it reasonable to calculate the total index storage by measuring how much space is used outside the /opt/splunk/var/lib directory and then allocating the remainder to indexes?

4) What approach would you recommend in my case? Is it reasonable to keep data for as long as possible, or are there reasons to avoid this approach?
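
For reference, this is roughly the kind of indexes.conf configuration I have in mind. The index name, paths, and sizes below are just placeholders, and the volume stanza is only one possible way to cap total usage across all indexes, so please treat it as a sketch rather than what we actually run:

# indexes.conf -- placeholder values, not a recommendation
# Define a volume so that all indexes placed on it together cannot exceed a fixed total.
# 800000 MB (~800 GB) is a made-up number, leaving headroom on a 1 TB disk for the OS,
# Splunk itself, and other data under /opt/splunk/var.
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 800000

# Example index stanza (hypothetical index name): cap this index individually as well.
[my_big_index]
homePath = volume:primary/my_big_index/db
coldPath = volume:primary/my_big_index/colddb
# thawedPath cannot reference a volume, so it uses a plain path.
thawedPath = $SPLUNK_DB/my_big_index/thaweddb
# Placeholder: roughly half of the 800 GB volume for our largest index.
maxTotalDataSizeMB = 400000
# Default retention is 188697600 seconds (about 6 years).
frozenTimePeriodInSecs = 188697600

As far as I understand, buckets get frozen (deleted by default) when either the size cap or frozenTimePeriodInSecs is reached, whichever comes first, which is why I am unsure how to split the available space between the indexes.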
