Deployment Architecture

How to manage the buckets in volumes configured for indexes?

keithyap
Path Finder

I have a question about managing the buckets in my volumes configured for indexes.

Below are my current configurations:

[volume:hotwarm]
path = /data/splunk/homedb
maxVolumeDataSizeMB = 900000

[volume:cold]
path = /data/splunk/colddb
maxVolumeDataSizeMB = 900000

[default]
maxDataSize = auto_high_volume
maxWarmDBCount = 80
frozenTimePeriodInSecs = 31104000
homePath.maxDataSizeMB = 800000
coldPath.maxDataSizeMB = 800000

Current data indexed is roughly 140 GB per day, and my hot/warm and cold volumes are 1 TB each (I know they are severely undersized at the moment, and we are working to increase the space).

After implementing the configurations above, my understanding is that the warm buckets would start to roll to cold after hitting homePath.maxDataSizeMB. However, the space utilization for the homePath is currently 900+ GB. Did I make a mistake in my configurations? Any advice on how best to manage the indexes would be greatly appreciated.

Another question I have is regarding some of the parameters in indexes.conf:
homePath.maxDataSizeMB - should this be set differently for each individual index, or would it be OK to set one value globally?
maxTotalDataSizeMB - like the above, should this be set differently for each individual index?

Regards,
Keith Yap

gjanders
SplunkTrust
homePath.maxDataSizeMB - should this be set differently for each individual index, or would it be OK to set one value globally?
maxTotalDataSizeMB - like the above, should this be set differently for each individual index?

It depends. I've always sized my indexes to the expected volume plus 20-30% contingency, setting homePath.maxDataSizeMB, coldPath.maxDataSizeMB and maxTotalDataSizeMB, where that last attribute is of course the sum of the first two.
I've also been in environments where only maxTotalDataSizeMB is set and the volume sizing handles rolling to cold.
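
As a rough sketch of the first approach (the index name, paths and figures below are made up for illustration, assuming roughly 60 GB each of expected hot/warm and cold data plus ~25% contingency):

[web_proxy]
homePath = volume:hotwarm/web_proxy/db
coldPath = volume:cold/web_proxy/colddb
thawedPath = $SPLUNK_DB/web_proxy/thaweddb
# ~60 GB expected hot/warm plus ~25% contingency
homePath.maxDataSizeMB = 75000
# ~60 GB expected cold plus ~25% contingency
coldPath.maxDataSizeMB = 75000
# sum of the two path caps above
maxTotalDataSizeMB = 150000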

The reason to limit maxTotalDataSizeMB is to prevent a situation where someone creates an infinite loop or a huge amount of logs that floods an index with data. That accident can reduce the data available to the other indexes on the system.
For example, if your indexer has 1 TB of storage and you have 5 indexes with a maxTotalDataSizeMB of 500 GB each, and one bad index uses the full 500 GB in 24 hours, the remaining 4 indexes then have to fit within the remaining 500 GB. If that "bad index" had been capped at 100 GB, this is not a scenario you would have to worry about.
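
Putting rough numbers on that worst case (same illustrative figures as above):
5 indexes x 500,000 MB maxTotalDataSizeMB = 2,500,000 MB of caps against ~1,000,000 MB of actual storage
one runaway index filling its 500 GB cap leaves only ~500 GB for the other four
capping that index at 100,000 MB instead limits the damage to ~100 GB, leaving ~900 GB for the rest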

After implementing the configurations above, my understanding is that the warm buckets would start to roll to cold after hitting homePath.maxDataSizeMB. However, the space utilization for the homePath is currently 900+ GB. Did I make a mistake in my configurations? Any advice on how best to manage the indexes would be greatly appreciated.

That sounds valid; I've never tested setting homePath.maxDataSizeMB without also setting maxTotalDataSizeMB. Are you saying it is being exceeded on a per-index, per-indexer basis?

Note that the numbers apply per-index, per indexer/search peer.
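
For example (illustrative figures, assuming a two-peer cluster and ignoring replication):
maxTotalDataSizeMB = 100000 caps that index at roughly 100 GB on each peer
2 peers x ~100 GB = ~200 GB of bucket storage for that index across the tier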

Also make sure you have read the "Configure index storage" documentation.

keithyap
Path Finder

@gjanders Thanks for the reply and advice. Currently the configurations are set as global attributes for both indexers (pushed via the Master Node).

Based on what you wrote above, I have done the below:

In the Splunk infrastructure I have now, there are 14 configured indexes. Since the current storage space available is only 1 TB, I divided it among the 14 indexes, which works out to about 74 GB per index for hot/warm and another 74 GB for cold, and I have set maxTotalDataSizeMB to 148 GB. I have also set these in the [default] stanza so the configurations apply globally to all the indexes.

[volume:hotwarm]
path = /data/splunk/homedb
maxVolumeDataSizeMB = 900000

[volume:cold]
path = /data/splunk/colddb
maxVolumeDataSizeMB = 900000

[default]
# auto_high_volume: about 10 GB per bucket
maxDataSize = auto_high_volume
# 10 GB * 80 buckets = 800 GB
maxWarmDBCount = 80
# will change this at a later date
frozenTimePeriodInSecs = 31104000
# 74 GB
homePath.maxDataSizeMB = 74000
# 74 GB
coldPath.maxDataSizeMB = 74000
# 148 GB
maxTotalDataSizeMB = 151552

Hopefully I am understanding it correctly.
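
A quick sanity check of those numbers (they apply per index, per indexer, and ignore replication):
900,000 MB per volume / 14 indexes = roughly 64,300 MB per index per tier, which is below the 74,000 MB path caps, so the volume limits would be reached before every index hits its own cap
74,000 MB homePath cap + 74,000 MB coldPath cap = 148,000 MB, slightly below the 151,552 MB maxTotalDataSizeMB, so the per-path caps are the effective per-index limit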

gjanders
SplunkTrust

That looks fine except for maxWarmDBCount. Using auto_high_volume means buckets must roll if they reach 10 GB; it does not mean they will reach 10 GB, so I wouldn't set maxWarmDBCount, the defaults are fine here.

Also, auto_high_volume is intended for indexes receiving a lot of data per day...

So just check that you want that setting, and use auto if you're not sure.

Finally, the settings you have posted assume all indexes are equal; if some have longer or shorter retention times, or more or less data per day, you might want to customise further.
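
For instance, a minimal sketch of such a per-index override (hypothetical index name and figures), leaving everything else to the [default] stanza:

[firewall]
# higher daily volume than the other indexes, so give it a larger share of the storage
homePath.maxDataSizeMB = 150000
coldPath.maxDataSizeMB = 150000
maxTotalDataSizeMB = 300000
# shorter retention for this data: 90 days instead of the global 360 days
frozenTimePeriodInSecs = 7776000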

keithyap
Path Finder

Awesome! Thanks @gjanders
Currently my retention is the same for all my data.
I will further tweak the configurations after I evaluate how much data each index receives per day.

Thanks Again!
