Deployment Architecture

How do I enforce disk usage on volumes by index?

Branden
Builder

Hello.

I'm setting up a new Splunk Enterprise environment - just a single indexer with forwarders.

There are two volumes on the server, one is on SSD for hot/warm buckets, and the other volume is HDD for cold buckets.

I'm trying to configure Splunk so that an index ("test-index") will only consume, say, 10 MB of the SSD volume. After it hits that threshold, the oldest hot/warm bucket should roll over to the slower HDD volume.

I've done various tests, but when the index's 10 MB SSD threshold is reached, all of the buckets roll over to cold storage, leaving the SSD volume empty.

Here is how indexes.conf is set now:

 

# SSD volume for hot/warm buckets
[volume:hot_buckets]
path = /srv/ssd
maxVolumeDataSizeMB = 430000

# HDD volume for cold buckets
[volume:cold_buckets]
path = /srv/hdd
maxVolumeDataSizeMB = 11000000

[test-index]
homePath = volume:hot_buckets/test-index/db
coldPath = volume:cold_buckets/test-index/colddb
thawedPath = /srv/hdd/test-index/thaweddb
# cap hot/warm data for this index on the SSD volume
homePath.maxDataSizeMB = 10

 

When the 10 MB threshold is reached, why is everything in hot/warm rolling over to cold storage? I had expected 10 MB of data to remain in hot/warm, with only the older buckets rolling over to cold.

I've poked around and found other articles related to maxDataSizeMB, but those questions don't align with what I'm experiencing.

Any guidance is appreciated. Thank you!

PickleRick
SplunkTrust

What do you mean by "all buckets are rolled to cold"? When you have a 10 MB limit for hot/warm storage, how many buckets do you expect?


Branden
Builder

Thank you for your reply.

Maybe I am not understanding.

I arbitrarily used "10 MB" as the limit so that I could quickly test this concept without repeatedly indexing large amounts of logs.

I'm not an expert on how all of this works, but from your response I get the impression that "10 MB" was probably too small a setting to experiment with.

Just to clarify, per index, I want X GB stored on SSD. Once that X GB is reached, older data should begin rolling over to HDD (cold) storage. That is what I'm trying to accomplish. This way, only 'younger' data will be stored on the expensive SSD disks.
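
For example (the number here is just a placeholder, not my real sizing), I assume the relevant setting would be something like this, on top of the volume definitions above:

[test-index]
# placeholder: keep roughly the newest ~100 GB of this index on the SSD volume
homePath.maxDataSizeMB = 102400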

Thank you!


PickleRick
SplunkTrust

10 MB is really a very low limit. If you end up with buckets smaller than that, Splunk will complain about small buckets.

But it should work like this:

Splunk creates a number of hot buckets for an index.

If a hot bucket grows too big or sits idle for too long, it gets rolled to warm.

If there are too many warm buckets or homePath.maxDataSizeMB is exceeded, the oldest bucket (the one whose earliest event is oldest) is rolled to cold.

When the latest event in a cold bucket becomes older than the retention period, that bucket is rolled to frozen. Likewise, when coldPath.maxDataSizeMB or maxTotalDataSizeMB is reached, the oldest bucket is rolled to frozen.

At any time, if a volume size limit is exceeded, Splunk rolls the oldest bucket on that volume (not including hot buckets, as far as I remember) to the next state.

So your settings look pretty sound. It's just that 10 MB is way too low.
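
As a rough sketch (the sizes below are illustrative placeholders, not a recommendation), something along these lines should keep the newest few tens of GB on SSD and roll only the oldest warm buckets to cold:

# SSD volume for hot/warm buckets
[volume:hot_buckets]
path = /srv/ssd
maxVolumeDataSizeMB = 430000

# HDD volume for cold buckets
[volume:cold_buckets]
path = /srv/hdd
maxVolumeDataSizeMB = 11000000

[test-index]
homePath = volume:hot_buckets/test-index/db
coldPath = volume:cold_buckets/test-index/colddb
thawedPath = /srv/hdd/test-index/thaweddb
# with maxDataSize = auto a single bucket can grow to roughly 750 MB,
# so the hot/warm cap should be many times the size of one bucket
maxDataSize = auto
# keep roughly the newest ~50 GB of this index on the SSD volume
homePath.maxDataSizeMB = 51200
# optionally cap this index's cold data on the HDD volume as well
coldPath.maxDataSizeMB = 512000

With limits in that range you should see several hot/warm buckets staying on the SSD volume and only the oldest ones rolling to cold.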
