Hi,
The best approach is to specify a size limit for hot/warm storage, and set the maximum bucket count well above what you actually expect, so that the size limit is what takes effect rather than the count.
You can do this in two ways:
1. Set homePath.maxDataSizeMB for the index. From the indexes.conf spec:
homePath.maxDataSizeMB =
* Limits the size of the hot/warm DB to the maximum specified size, in MB.
* If this size is exceeded, Splunk will move buckets with the oldest value of latest time (for a given bucket)
into the cold DB until the DB is below the maximum size.
* If this attribute is missing or set to 0, Splunk will not constrain size of the hot/warm DB.
* Defaults to 0.
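For example, a minimal index stanza using this setting might look like the following (the index name and sizes are placeholders; paths follow the usual $SPLUNK_DB layout):

[idx1]
homePath = $SPLUNK_DB/idx1/db
coldPath = $SPLUNK_DB/idx1/colddb
thawedPath = $SPLUNK_DB/idx1/thaweddb
# cap hot/warm at ~100 GB; oldest warm buckets roll to cold beyond this
homePath.maxDataSizeMB = 100000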
2. Set up a volume, and use it for hot/warm:
# volume definitions; prefixed with "volume:"
[volume:hot1]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000
[volume:cold1]
path = /mnt/big_disk
# maxVolumeDataSizeMB not specified: no data size limitation on top of the existing ones
[volume:cold2]
path = /mnt/big_disk2
maxVolumeDataSizeMB = 1000000
# index definitions
[idx1]
homePath = volume:hot1/idx1
coldPath = volume:cold1/idx1
Note that buckets are not guaranteed to reach their maximum size; they can roll 'early' for a number of reasons. I'd slightly prefer the second option, as it's easier to manage when you have multiple indexes sharing the same storage.
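If you want to verify how buckets actually land on disk after applying either option, the dbinspect search command reports per-bucket details (the field names below, state and sizeOnDiskMB, are standard dbinspect output, but check them against your Splunk version):

| dbinspect index=idx1 | stats sum(sizeOnDiskMB) AS totalMB by state

This gives you the current hot/warm/cold totals per index, which you can compare against your configured limits.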
Duncan