Deployment Architecture

How to move data to colddb after 30 days

VatsalJagani
Motivator

I want to move data from the hot/warm buckets to colddb (as colddb ultimately sits on a different storage location).

I've checked indexes.conf definition:

maxHotSpanSecs = <positive integer>
* Upper bound of timespan of hot/warm buckets, in seconds.

I tried changing the above setting, but I don't see data being moved to the new location (colddb) after pushing the configuration.

 

I found the question below on the community, but it is quite old (from 2014), so I want to confirm before applying anything to Production.

https://community.splunk.com/t5/Getting-Data-In/How-send-indexed-data-older-than-3-months-to-colddb-...

maxHotSpanSecs = 86400
maxHotBuckets = 3
maxWarmDBCount = 30

 

What configuration should I apply to achieve the above requirement?

Will Splunk automatically move buckets to colddb on restart, or do we need to perform any manual steps?

 


VatsalJagani
Motivator

Splunk provides only two options to specify when it should move buckets from warm to cold:

 

homePath.maxDataSizeMB

  • Specifies the maximum size of 'homePath' (which contains hot and warm buckets).
  • If this size is exceeded, splunkd moves buckets with the oldest value of latest time (for a given bucket) into the cold DB until homePath is below the maximum size.

maxWarmDBCount

  • The maximum number of warm buckets.
  • Default - 300

 

In my case, applying the maxWarmDBCount setting to 3 large indexes solved the storage issue.
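As an illustration, both settings could be combined in indexes.conf like this (the index name "myindex" and the specific values are examples, not from this thread):

[myindex]
homePath = $SPLUNK_DB/myindex/db
coldPath = $SPLUNK_DB/myindex/colddb
thawedPath = $SPLUNK_DB/myindex/thaweddb
# Roll the oldest warm buckets to cold once hot + warm exceed ~100 GB
homePath.maxDataSizeMB = 102400
# Also roll warm to cold once there are more than 50 warm buckets
maxWarmDBCount = 50

Whichever limit is hit first triggers the warm-to-cold roll.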


VatsalJagani
Motivator

@richgalloway - I've discussed this with some other Splunk Admins, and they confirmed exactly what you mentioned: there is no reliable way to limit data in warm buckets by time.

But in my case, the underlying issue was storage filling up, and for that I found an alternative option using volumes.

 

Here is how:

# Volume definitions
[volume:hot]
path = /hotstorage/splunk
maxVolumeDataSizeMB = 256000

[volume:cold]
path = /coldstorage/splunk

# Index definition
[myindex]
coldPath = volume:cold/myindex/colddb
homePath = volume:hot/myindex/db
thawedPath = /coldstorage/splunk/myindex/thaweddb

 


richgalloway
SplunkTrust
SplunkTrust

If your problem is resolved, then please click the "Accept as Solution" button to help future readers.

---
If this reply helps you, an upvote would be appreciated.

richgalloway
SplunkTrust
SplunkTrust

Don't forget about warm buckets.  Hot buckets generally roll to warm before they roll to cold.  When Splunk restarts, all hot buckets become warm buckets.

While there are time constraints on how long a bucket remains hot and when a bucket is frozen, there are no time constraints on warm buckets.  Only size and count control when a warm bucket becomes a cold bucket.
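To summarize, here is a sketch of which indexes.conf settings control each transition (the values shown are illustrative, not a recommendation):

[myindex]
# Hot -> warm: a time-based control exists
maxHotSpanSecs = 86400
# Warm -> cold: only size and count, no time-based setting
homePath.maxDataSizeMB = 102400
maxWarmDBCount = 300
# Cold -> frozen: time-based again (default is roughly 6 years)
frozenTimePeriodInSecs = 188697600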

---
If this reply helps you, an upvote would be appreciated.