Deployment Architecture

Splunk size of hot/warm/cold bucket


Hello Splunkers

My setup:
1. I have a 4-indexer cluster (not multi-site) with a Search Head Cluster.
2. Each indexer has a 2TB hard disk.
3. I have a deployer server and a Cluster Master to manage the cluster setup.

Can anyone suggest how I can configure the log retention policy so that my 2TB HDD doesn't get exhausted? My intention is to use the complete 2TB HDD for Hot and Warm buckets, and once the data is ready to move to the Cold/Frozen path, I would like to transfer the logs to my Azure blobs. There is no specific plan with respect to number of days (log types); instead, it's based on the amount of HDD I have (2TB).

Thank you in advance.



I recommend configuring a single Volume for your Hot/Warm data:

maxVolumeDataSizeMB = Max size Splunk will use on disk

Then for each index:

homePath = volume:hot_volume/sample_index/db

As long as all indexes on the drive use volume:hot_volume, Splunk will roll buckets as needed to ensure the volume remains below maxVolumeDataSizeMB.
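A minimal indexes.conf sketch of this layout (the path, volume size, and index name are placeholders, not your actual values; size the volume below the 2TB physical disk to leave headroom):

```ini
# indexes.conf -- hypothetical example; paths and sizes are placeholders
[volume:hot_volume]
path = /opt/splunk/var/lib/splunk
# Cap total hot/warm usage well below the 2TB physical disk
maxVolumeDataSizeMB = 1800000

[sample_index]
homePath = volume:hot_volume/sample_index/db
coldPath = $SPLUNK_DB/sample_index/colddb
thawedPath = $SPLUNK_DB/sample_index/thaweddb
```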


Thank you solarboyz1

I would like to highlight that I am using only the "main" index.

One more thing you missed in your suggestion: the transfer of frozen data to my Azure blobs.



I recommend you check out:

A volume can be used for a single index or multiple indexes.
We have hot and cold volumes defined, and pointed to hot (SSD) and cold (HDD) storage.

We have sized the storage based on our expected usage for a given time period (hot/warm: 30 days, cold: 90 days). We then use the volume sizes to control the hot/warm/cold rolling.

Since index usage can fluctuate, we have had to increase maxTotalDataSizeMB for some indexes that would otherwise exceed the default setting. We set frozenTimePeriodInSecs to control the data retention.

If you are only using one index, you can also apply sizing controls to it:

homePath =
homePath.maxDataSizeMB =
coldPath.maxDataSizeMB =
maxTotalDataSizeMB =
frozenTimePeriodInSecs =
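For example (hypothetical values for the "main" index, whose default homePath lives under defaultdb; the sizes here are only illustrations to adapt to your disk):

```ini
# indexes.conf -- example sizing for the "main" index; values are placeholders
[main]
homePath = volume:hot_volume/defaultdb/db
homePath.maxDataSizeMB = 1500000
coldPath.maxDataSizeMB = 200000
maxTotalDataSizeMB = 1700000
# Freeze (archive, then delete) buckets older than ~90 days, in seconds
frozenTimePeriodInSecs = 7776000
```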

"One more thing you missed in your suggestion: the transfer of frozen data to my Azure blobs."

coldToFrozenScript =

Specifies a script to run when data is to leave the Splunk index system.

* Essentially, this implements any archival tasks before the data is deleted out of its default location.
* Add "$DIR" (including quotes) to this setting on Windows (see below for details).
* Script requirements:
  * The script must accept one argument: an absolute path to the bucket directory to archive.
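For Azure, a coldToFrozenScript could be a thin wrapper around the Azure CLI. The sketch below is a hypothetical example, not Splunk-provided code: the container name, paths, and `az storage blob upload-batch` invocation are assumptions to adapt (and authentication to Azure must already be configured for the Splunk user).

```python
#!/usr/bin/env python3
"""Hypothetical coldToFrozenScript: copy a frozen bucket to Azure Blob
Storage before Splunk deletes it from disk. Adapt container/auth to your
environment."""
import subprocess
import sys
from pathlib import Path

CONTAINER = "splunk-frozen"  # assumed Azure blob container name

def blob_prefix(bucket_dir: str) -> str:
    """Derive a blob prefix like 'main/db_1692000000_1691000000_42' from an
    absolute bucket path such as
    /opt/splunk/var/lib/splunk/main/colddb/db_1692000000_1691000000_42."""
    p = Path(bucket_dir)
    index_name = p.parent.parent.name  # .../<index>/colddb/<bucket>
    return f"{index_name}/{p.name}"

def archive(bucket_dir: str) -> None:
    # upload-batch copies every file under the bucket directory recursively
    subprocess.run(
        ["az", "storage", "blob", "upload-batch",
         "--destination", CONTAINER,
         "--destination-path", blob_prefix(bucket_dir),
         "--source", bucket_dir],
        check=True,
    )

if __name__ == "__main__" and len(sys.argv) > 1:
    # Splunk passes exactly one argument: the bucket directory to archive.
    archive(sys.argv[1])
```

Point coldToFrozenScript at this file in indexes.conf; if the script exits non-zero, Splunk retries the bucket later rather than deleting it.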

You can check the following for tips on creating the script using the AWS CLI (the same approach applies to the Azure CLI).
