Splunk Enterprise

Thaw/Rebuild Data Error - Cannot Accommodate Maximum Number of Hot Buckets

Path Finder

We use a coldToFrozenScript to store frozen index data in GCS. To prove our DR annually, we need to restore. This is the first time I have done so at this company, and an error pukes out when I run the rebuild command. That said, the data appears to show up in Splunk and is searchable. So I'm wondering: is this error something that can be dismissed, or is it something I should pay attention to?
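For reference, a rough sketch of the restore path described above; the GCS bucket name and the thawed bucket directory name are placeholders, not our real values:

```shell
# Copy a frozen bucket from GCS into the index's thawed path
gsutil -m cp -r gs://frozen-archive/linux/db_1672531200_1640995200_42 \
    "$SPLUNK_DB/linux/thaweddb/"

# Rebuild the bucket's index and metadata files (frozen buckets retain only rawdata)
"$SPLUNK_HOME/bin/splunk" rebuild "$SPLUNK_DB/linux/thaweddb/db_1672531200_1640995200_42"

# Restart splunkd so it picks up the thawed bucket
"$SPLUNK_HOME/bin/splunk" restart
```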


WARN IndexConfig - Home path size limit cannot accommodate maximum number of hot buckets with specified bucket size because homePath.maxDataSizeMB is too small. Please check your index configuration: idx=linux maxDataSize=750 MB, homePath.maxDataSizeMB=800 MB

The indexes.conf stanza for this index is as follows:

[linux]
repFactor = auto
homePath = volume:indexvol001/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
thawedPath = $SPLUNK_DB/linux/thaweddb
frozenTimePeriodInSecs = 31536000
homePath.maxDataSizeMB = 800
maxTotalDataSizeMB = 491789400
maxWarmDBCount = 285
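The numbers in the warning can be checked against this stanza directly. A back-of-the-envelope sketch, assuming the default maxHotBuckets of 3 (it is not overridden above):

```ini
# What the WARN compares:
#   space the hot buckets alone can need: maxHotBuckets (3) * maxDataSize (750 MB) = 2250 MB
#   configured cap:                       homePath.maxDataSizeMB = 800 MB
# 2250 MB > 800 MB, so splunkd warns at startup that homePath cannot
# accommodate the maximum number of hot buckets.
```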


Revered Legend

Any specific reason you've overridden the value of homePath.maxDataSizeMB to 800 for this index?


homePath.maxDataSizeMB = <nonnegative integer>
* Specifies the maximum size of 'homePath' (which contains hot and warm
  buckets).

Your current value is very small. That folder contains all hot and warm buckets (by default 3 hot buckets and 300 warm buckets), and each bucket can grow to maxDataSize: 750 MB here, or 10 GB with the auto_high_volume setting. You should leave it at the default value of 0, which leaves homePath unconstrained.
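If the team agrees, the suggested change is a one-line revert; a minimal sketch, keeping the rest of the stanza as posted:

```ini
[linux]
# 0 (the default) removes the homePath size cap; rolling to cold is then
# governed by volume limits, maxWarmDBCount, and maxTotalDataSizeMB instead.
homePath.maxDataSizeMB = 0
```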

Path Finder

It was set at 600; I changed it to 800 because of the error. But after restarting the Splunk daemon following the rebuild command, I searched for the restored data and found it, so I question whether the error is valid.

The number was set by another admin before my time, and I'm not sure why it's there. We ingest about 10 TB per day and started with small cloud storage. Our Cold volume is 5x as large as our Hot/Warm volume, so it's possible we needed to hurry data out of Hot/Warm due to capacity issues. There may also have been some micro-management of indexes at some point for charge-back purposes: keeping data on disk longer cost teams more, so they chose to move it off to GCS Coldline/Nearline.

I will definitely take your advice/notes back to the team and see what thoughts are on reverting it to the default value.  Thank you for your input!
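If you want to confirm the thawed buckets themselves rather than just searching the restored events, something like this should list them (dbinspect is a standard command; filtering on the thaweddb path is my assumption about where your restored buckets land):

```
| dbinspect index=linux
| search path="*thaweddb*"
| table bucketId, state, startEpoch, endEpoch, path
```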
