Hi there,
I'm looking at sizing an environment for SmartStore. Does anyone have a formula or spreadsheet that will factor in the storage needs for the entire cluster's local storage, including the SmartStore cache?
Thanks!

Remote Object Store sizing = Daily Ingest Rate x Compression Ratio x Retention period
Compression ratio is generally 50% (rawdata compresses to roughly 15% of the original size, and the tsidx metadata files add roughly 35%), but this is entirely dependent on the type of data. For higher-cardinality data the tsidx files grow larger, so the data compresses less effectively and the storage sizing requirement increases.
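As a worked example with purely hypothetical numbers: 100 GB/day of ingest x 0.5 compression ratio x 365 days of retention = 18,250 GB, or roughly 18 TB of remote object storage.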
Global Cache sizing = Daily Ingest Rate x Compression Ratio x (RF x Hot Days + (Cached Days - Hot Days)), where RF is the cluster's replication factor
Cache sizing per indexer = Global Cache sizing / number of indexers
Cached Days = the number of days of data to keep in the local cache. Splunk recommends 30 days for Splunk Enterprise and 90 days for Enterprise Security.
Hot Days = the number of days before hot buckets roll to warm. Ideally this is between 1 and 7, but configure it based on how quickly hot buckets roll in your environment.
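To make the arithmetic concrete, here is a minimal Python sketch of the formulas above. Every input value is a hypothetical example, not a recommendation; substitute your own ingest rate, retention, replication factor, and indexer count.

```python
# Sketch of the SmartStore sizing formulas above.
# All inputs are hypothetical example values.

DAILY_INGEST_GB = 100      # raw ingest per day
COMPRESSION_RATIO = 0.5    # ~50% is typical; measure your own data
RETENTION_DAYS = 365       # remote object store retention
REPLICATION_FACTOR = 3     # RF of the indexer cluster
HOT_DAYS = 7               # days before hot buckets roll to warm
CACHED_DAYS = 30           # 30 for Splunk Enterprise, 90 for ES
NUM_INDEXERS = 6           # indexers in the cluster

# Remote Object Store sizing = Daily Ingest x Compression Ratio x Retention
remote_store_gb = DAILY_INGEST_GB * COMPRESSION_RATIO * RETENTION_DAYS

# Global Cache sizing = Daily Ingest x Compression Ratio
#                       x (RF x Hot Days + (Cached Days - Hot Days))
global_cache_gb = DAILY_INGEST_GB * COMPRESSION_RATIO * (
    REPLICATION_FACTOR * HOT_DAYS + (CACHED_DAYS - HOT_DAYS)
)

# Cache sizing per indexer = Global Cache sizing / number of indexers
cache_per_indexer_gb = global_cache_gb / NUM_INDEXERS

print(f"Remote object store: {remote_store_gb:,.0f} GB")
print(f"Global cache:        {global_cache_gb:,.0f} GB")
print(f"Cache per indexer:   {cache_per_indexer_gb:,.0f} GB")
```

With these example inputs the remote store works out to 18,250 GB, the global cache to 2,200 GB, and each of the six indexers needs about 367 GB of local cache.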
read here:
https://docs.splunk.com/Documentation/Splunk/8.0.0/Indexer/SmartStoresystemrequirements
https://docs.splunk.com/Documentation/Splunk/latest/Indexer/SmartStorearchitecture
This answer is adapted from:
https://answers.splunk.com/answers/764258/smartstore-how-to-calculate-storage-requirements-f-1.html