@rbal_splunk Looks like the specs have changed since you wrote this article. It seems the behavior you describe for maxGlobalDataSizeMB is now called maxGlobalRawDataSizeMB, and maxGlobalDataSizeMB now applies to SmartStore. From the current spec:

maxGlobalRawDataSizeMB = <nonnegative integer>
* The maximum amount of cumulative raw data (in MB) allowed in a remote
storage-enabled index.
* This setting is available for both standalone indexers and indexer clusters.
In the case of indexer clusters, the raw data size is calculated as the total
amount of raw data ingested for the index, across all peer nodes.
* When the amount of uncompressed raw data in an index exceeds the value of this
setting, the bucket containing the oldest data is frozen.
* For example, assume that the setting is set to 500 and the indexer cluster
has already ingested 400MB of raw data into the index, across all peer nodes.
If the cluster ingests an additional amount of raw data greater than 100MB in
size, the cluster freezes the oldest buckets, until the size of raw data
reduces to less than or equal to 500MB.
* This value applies to warm and cold buckets. It does not
apply to hot or thawed buckets.
* The maximum allowable value is 4294967295.
* Default: 0 (no limit to the amount of raw data in an index)
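So, as I understand it, capping the raw data ingested into a SmartStore index would look something like the sketch below. The index name, paths, and volume name are placeholders I made up, and the 500 MB cap just matches the example above:

# Hypothetical SmartStore index; "volume:remote_store" is assumed to be
# defined in its own [volume:remote_store] stanza elsewhere in indexes.conf
[my_smartstore_index]
remotePath = volume:remote_store/$_index_name
homePath   = $SPLUNK_DB/my_smartstore_index/db
coldPath   = $SPLUNK_DB/my_smartstore_index/colddb
thawedPath = $SPLUNK_DB/my_smartstore_index/thaweddb
# Freeze the oldest buckets once the cumulative raw data for this index
# (across all peer nodes) exceeds roughly 500 MB
maxGlobalRawDataSizeMB = 500

And the companion setting: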
maxGlobalDataSizeMB = <nonnegative integer>
* The maximum size, in megabytes, for all warm buckets in a SmartStore
index on a cluster.
* This setting includes the sum of the size of all buckets that reside
on remote storage, along with any buckets that have recently rolled
from hot to warm on a peer node and are awaiting upload to remote storage.
* If the total size of the warm buckets in an index exceeds
'maxGlobalDataSizeMB', the oldest bucket in the index is frozen.
* For example, assume that 'maxGlobalDataSizeMB' is set to 5000 for
an index, and the index's warm buckets occupy 4800 MB. If a 750 MB
hot bucket then rolls to warm, the index size now exceeds
'maxGlobalDataSizeMB', which triggers bucket freezing. The cluster
freezes the oldest buckets on the index, until the total warm bucket
size falls below 'maxGlobalDataSizeMB'.
* The size calculation for this setting applies on a per-index basis.
* The calculation applies across all peers in the cluster.
* The calculation includes only one copy of each bucket. If a duplicate
copy of a bucket exists on a peer node, the size calculation does
not include it.
* For example, if the bucket exists on both remote storage and on a peer
node's local cache, the calculation ignores the copy on local cache.
* The calculation includes only the size of the buckets themselves.
It does not include the size of any associated files, such as report
acceleration or data model acceleration summaries.
* The highest legal value is 4294967295 (4.2 petabytes).
* Default: 0 (no limit to the space that the warm buckets in an index can occupy)

We just completed a rather painful migration to SmartStore, and now we are reviewing retention settings and seeing how to control growth. Your SmartStore articles have been super helpful to us in solving a number of our problems. Do you have any additional recommendations or insights on how to manage storage in the SmartStore world?