I have some confusion on calculating maxTotalDataSizeMB for configuring in indexes.conf file. Below are the details:
Daily Data volume: 400GB
Retention Period: 90 days
Number of indexers in cluster: 20
Search Factor: 2
Replication Factor: 3
What will be the value of the maxTotalDataSizeMB parameter in indexes.conf for a particular index? Will it be (400 × 90 × 1024) MB, or (400 × 90 × 1024) divided by 20 indexers? If maxTotalDataSizeMB is too low, data will be deleted before the retention period ends. What is the optimum size for this?
homePath = volume:primary/index/db
coldPath = volume:primary/index/colddb
thawedPath = $SPLUNKDB/index/thaweddb
tstatsHomePath = volume:primary/index/datamodelsummary
maxTotalDataSizeMB = 36864000????
frozenTimePeriodInSecs = 7776000
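For reference, the two candidate values from the question work out as follows (a quick sketch of the arithmetic; note that 400 × 90 × 1024 is exactly the 36864000 already used in the snippet above):

```python
# The two candidates from the question: total retained volume in MB,
# and that total split evenly across the 20 indexers.
daily_gb, retention_days, indexers = 400, 90, 20

total_mb = daily_gb * retention_days * 1024
per_indexer_mb = total_mb // indexers
print(total_mb)        # 36864000 (matches the value in indexes.conf above)
print(per_indexer_mb)  # 1843200
```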
Thanks somesoni2 for your response. Does this mean the "(per Indexer)" value should be set as maxTotalDataSizeMB?
If the 400GB daily ingestion is the data volume for a single index, then yes, maxTotalDataSizeMB should be set to the per-indexer value. In fact, if you scroll down, it gives you that value as well (as a configuration file entry).
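To illustrate the per-indexer arithmetic, here is a rough sketch using the commonly cited rule of thumb that compressed rawdata is roughly 15% of incoming volume and index (tsidx) files roughly 35%, with rawdata stored once per replication-factor copy and index files once per search-factor copy. The percentages are assumptions that vary with your data; the sizing calculator is the more accurate source.

```python
# Rough per-indexer sizing sketch for maxTotalDataSizeMB.
# Assumed ratios (not from this thread): rawdata ~15% of raw ingest,
# index files ~35%; rawdata kept per RF copy, index files per SF copy.
daily_gb = 400          # daily ingest for this index
retention_days = 90
indexers = 20
rf, sf = 3, 2           # replication factor, search factor
raw_pct, idx_pct = 15, 35  # assumed % of raw ingest per copy

total_gb = daily_gb * retention_days * (raw_pct * rf + idx_pct * sf) / 100
per_indexer_mb = total_gb / indexers * 1024
print(total_gb)         # 41400.0 GB across the whole cluster
print(per_indexer_mb)   # 2119680.0 MB (~2 TB) per indexer, before headroom
```

That is far below the 36864000 in the question, which is why monitoring real usage before locking in a value matters.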
There are a few things to be aware of:
maxTotalDataSizeMB is the maximum size of an index per indexer and includes the storage required to store replicated buckets from other cluster peers. Your setting of 36864000 allows each indexer to store up to approximately 37 TB.
Consider setting maxDataSize = auto_high_volume, which will allow larger, more reasonably-sized buckets for your incoming volume.
It would be best to have a few days of data indexed to extrapolate from when setting maxTotalDataSizeMB. If you're not at risk of filling the underlying disk, consider setting maxTotalDataSizeMB to a large value and monitoring with the Distributed Management Console's "Index Detail: Deployment" view, which shows how many days of data you have on each indexer and how much storage each is using for a given index. You can adjust it down later.
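The extrapolation step can be sketched like this. The observed figures and the 20% headroom are invented for illustration; in practice the per-indexer usage would come from the console view mentioned above.

```python
# Hypothetical extrapolation from a few days of observed usage.
# observed_mb: per-indexer disk usage for this index after N days
# (value invented for the example).
observed_mb = 69_000
observed_days = 3
retention_days = 90
headroom = 1.2   # assumed ~20% cushion for failures and maintenance

mb_per_day = observed_mb / observed_days
projected = mb_per_day * retention_days * headroom
print(round(projected))  # projected maxTotalDataSizeMB per indexer
```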
In practice, you won't want to set maxTotalDataSizeMB too close to your minimum requirements. Because this value includes replicated buckets, you'll need extra storage to absorb the impact of a cluster member failure. Splunk will not automatically clean up the excess replicated buckets that result from these situations, and they can have a surprising effect on index size. It's important to respond quickly to unplanned outages and to use maintenance mode on the cluster when performing planned maintenance.