Getting Data In

Index Size : Calculate maxTotalDataSizeMB

Contributor

Hello Team,

I have some confusion about calculating maxTotalDataSizeMB for the indexes.conf file. Below are the details:

Daily Data volume: 400GB
Retention Period: 90 days
Number of indexers in cluster: 20
Search Factor: 2
Replication Factor: 3

What should the value of the maxTotalDataSizeMB parameter be in indexes.conf for a particular index? Will it be (400 × 90 × 1024) MB, or (400 × 90 × 1024) MB divided by 20 indexers? If maxTotalDataSizeMB is too low, data will be deleted before the retention period ends. What is the optimum size for this?

[index]
homePath = volume:primary/index/db
coldPath = volume:primary/index/colddb
thawedPath = $SPLUNKDB/index/thaweddb
tstatsHomePath = volume:primary/index/datamodel_summary
maxTotalDataSizeMB = 36864000????
frozenTimePeriodInSecs = 7776000

Thanks
Hemendra


Re: Index Size : Calculate maxTotalDataSizeMB

SplunkTrust

You can use the Splunk sizing web app to calculate this. Here is an example based on your configuration:

https://splunk-sizing.appspot.com/#ar=0&c=1&cr=90&hwr=7&i=20&rf=2&v=400

View solution in original post


Re: Index Size : Calculate maxTotalDataSizeMB

Contributor

Thanks somesoni2 for your response. Does this mean the "(per Indexer)" value should be set as maxTotalDataSizeMB?


Re: Index Size : Calculate maxTotalDataSizeMB

SplunkTrust

If the 400 GB daily ingestion is the volume for a single index, then yes, maxTotalDataSizeMB should be set to the per-indexer value. If you scroll down, the app gives you that value as well (as a configuration file entry).
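As a rough cross-check of the sizing app's output, here is a back-of-the-envelope sketch in Python. The ~15% rawdata and ~35% tsidx ratios are common Splunk rules of thumb, not measured values for this data, so treat the result as an estimate only; replicated copies carry rawdata (replication factor) while only searchable copies carry tsidx files (search factor).

```python
# Hypothetical per-indexer sizing estimate; adjust the ratios for your data.
daily_raw_gb = 400        # daily ingest (license/raw volume)
retention_days = 90
replication_factor = 3    # total copies of rawdata across the cluster
search_factor = 2         # searchable (tsidx-bearing) copies
indexers = 20
rawdata_ratio = 0.15      # compressed rawdata size vs. raw (rule of thumb)
tsidx_ratio = 0.35        # index file size vs. raw (rule of thumb)

raw_total_gb = daily_raw_gb * retention_days
cluster_gb = raw_total_gb * (rawdata_ratio * replication_factor
                             + tsidx_ratio * search_factor)
per_indexer_mb = round(cluster_gb / indexers * 1024)

print(per_indexer_mb)  # ≈ 2119680 MB per indexer under these assumptions
```

Under these assumptions, each indexer would need roughly 2 TB for this index, which is what you would then pad with headroom before setting maxTotalDataSizeMB.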


Re: Index Size : Calculate maxTotalDataSizeMB

Contributor

Thanks for the info.


Re: Index Size : Calculate maxTotalDataSizeMB

Builder

There are a few things to be aware of:

  1. Incoming data volume of 400 GB from a license/raw data standpoint does not necessarily equate to 400 GB of storage required on disk. It could be more or less depending on the type of data, segmentation settings, and possibly other factors.
  2. maxTotalDataSizeMB is the maximum size of an index per indexer and includes the storage required to store replicated buckets from other cluster peers. Your setting of 36864000 allows each indexer to store up to approximately 37 TB.
  3. At that high data volume, you may end up with thousands of buckets for this index alone. You may want to experiment with maxDataSize = auto_high_volume, which allows larger buckets (10 GB on 64-bit systems) that are more reasonable for your incoming volume.

It would be best to have a few days of data indexed to extrapolate from when setting maxTotalDataSizeMB. If you're not at risk of filling the underlying disk, consider setting maxTotalDataSizeMB to a large value and monitoring it with the Distributed Management Console's "Index Detail: Deployment" view, which shows how many days of data you have on each indexer and how much storage each uses for a given index. You can adjust the limit down later.

In practice, you won't want to set maxTotalDataSizeMB too close to your minimum requirements. Because this value includes replicated buckets, you'll need extra storage to absorb the impact of a cluster member failure. Splunk will not automatically clean up the excess replicated buckets that result from these situations, and they can have a surprising effect on index size. It's crucial to respond quickly to unplanned outages and to use maintenance mode on the cluster when performing planned maintenance.
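Putting the points above together, a sketch of what the stanza might look like follows. The maxTotalDataSizeMB value here is purely illustrative; size it from your own measured disk usage plus headroom for replicated buckets, as described above.

```ini
[index]
homePath   = volume:primary/index/db
coldPath   = volume:primary/index/colddb
thawedPath = $SPLUNKDB/index/thaweddb
# larger (10 GB) hot buckets for high-volume indexes (point 3 above)
maxDataSize = auto_high_volume
# illustrative per-indexer cap; includes replicated buckets, so leave headroom
maxTotalDataSizeMB = 2764800
# 90 days in seconds
frozenTimePeriodInSecs = 7776000
```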


Re: Index Size : Calculate maxTotalDataSizeMB

Contributor

Thanks jtacy for the useful information. We will check on this.
