Does maxVolumeDataSizeMB in indexes.conf do anything?

eugenekogan
Explorer

As far as I can tell, setting maxVolumeDataSizeMB does not trigger bucket moves and has no impact at all. Does anyone use this setting successfully, and can help me understand how it actually works? The documentation has not proven useful. Thanks!

1 Solution

jbsplunk
Splunk Employee

Which version of Splunk, and on what platform? I'm not aware of any current defects in this behavior.

per indexes.conf.spec:

maxVolumeDataSizeMB =
* Optional.
* If set, this attribute will limit the total cumulative size of all databases
  that reside on this volume to the maximum size specified, in MB.
* If the size is exceeded, Splunk will remove buckets with the oldest value of
  latest time (for a given bucket) across all indexes in the volume, until the
  volume is below the maximum size.
* Note that this can cause buckets to be frozen directly from a warm DB, if
  those buckets happen to have the oldest value of latest time across all
  indexes in the volume.

It is important to understand that this value acts on the aggregate size of the indexes that reference the volume on the instance where this setting exists. In other words, it is not the equivalent of running 'df -k' and seeing that the filesystem is over the specified threshold: if you have other data on that disk, such as frozen buckets, or you use the disk for other kinds of storage, Splunk won't take that into consideration.
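
As an illustration (the volume name, path, and sizes below are made up, not taken from any particular deployment), the accounting only looks at database paths that reference the volume:

[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

[main]
# These two paths reference volume:primary, so their contents count
# toward the 500,000 MB cap...
homePath = volume:primary/defaultdb/db
coldPath = volume:primary/defaultdb/colddb
# ...whereas thawed data, frozen archives, or any non-Splunk files on
# the same filesystem are not counted.
thawedPath = $SPLUNK_DB/defaultdb/thaweddb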

SPL-50187 was filed to make this behavior clearer in the documentation.

splunkd should log messages from the 'BucketMover' component whenever a configured trigger is exceeded, so it would be a good idea to review splunkd.log and get an idea of what is happening. What specific behavior have you observed that makes you think this isn't working?
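
For example, something along these lines should surface the relevant messages (the path assumes a default $SPLUNK_HOME layout):

grep BucketMover $SPLUNK_HOME/var/log/splunk/splunkd.log

or, from the search bar:

index=_internal sourcetype=splunkd component=BucketMover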

trademarq
Explorer

Hi

maxVolumeDataSizeMB only has an effect if you have a volume set up in indexes.conf and an index (also in indexes.conf) configured to use it. All of the indexes configured that way then use the volume's mount point for their storage, and you can use maxVolumeDataSizeMB to regulate how large the volume as a whole is allowed to get. Note that you can also use maxTotalDataSizeMB in an index stanza to regulate size on a per-index basis. My guess is that you don't have maxTotalDataSizeMB set on an index. A piece of my indexes.conf is below; just note that I never roll things over to cold or frozen, I keep things hot or warm 😉

# Set up shared disk pool for indexers
[volume:hotwarm]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 900000

# Main indexes
[main]
homePath = volume:hotwarm/defaultdb/db
coldPath = volume:hotwarm/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
maxTotalDataSizeMB = 850000

In English: my main index is stored in a volume called hotwarm. The hotwarm volume's size limit is 900,000 MB, but the main index in that volume can only grow to 850,000 MB (to leave space for some other, smaller indexes I have).
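
For example, a second, smaller index sharing the same volume might look like this (the index name and sizes here are just illustrative):

[app_logs]
homePath = volume:hotwarm/app_logs/db
coldPath = volume:hotwarm/app_logs/colddb
thawedPath = $SPLUNK_DB/app_logs/thaweddb
maxTotalDataSizeMB = 40000

In this sketch the two per-index caps (850,000 MB + 40,000 MB) stay under the 900,000 MB volume limit, so the per-index limits would normally kick in before the volume-wide one does.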

gkanapathy
Splunk Employee

Show us your indexes.conf.

gkanapathy
Splunk Employee

Yes. Another way to put it is that "volume" here does not refer to your OS disk volumes. Rather, it refers to the volumes defined within indexes.conf, and data is only considered to live on a "volume" if its paths are defined that way in indexes.conf. That's probably what's misleading.
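
A quick sketch of the distinction (index names and paths here are hypothetical):

[volume:example]
path = /splunk/example
maxVolumeDataSizeMB = 900000

# Counts toward the 900,000 MB volume limit, because its hot/warm and
# cold paths reference volume:example
[web]
homePath = volume:example/web/db
coldPath = volume:example/web/colddb
thawedPath = /splunk/example/web/thaweddb

# Lives on the same OS filesystem, but does NOT count toward the volume
# limit, because its paths never reference volume:example
[security]
homePath = /splunk/example/security/db
coldPath = /splunk/example/security/colddb
thawedPath = /splunk/example/security/thaweddb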

eugenekogan
Explorer

Sorry, forgot to say this is for Splunk 4.3.1 on RHEL5.

eugenekogan
Explorer

I set maxVolumeDataSizeMB to 90% of a disk partition, but the partition filled up to 100%, and no buckets were moved from warm to cold. The partition is dedicated to Splunk, so there are no other files on it.
