Building for the Splunk Platform

How to set 3-month searchable data retention on a high-volume-per-day index? Currently only seeing 2 weeks' worth.

u788332
New Member

If maxDataSize is set to auto, the default bucket size is 750 MB; if it is set to auto_high_volume, it is 10 GB on 64-bit hosts and 1 GB on 32-bit hosts. Is that correct?

frozenTimePeriodInSecs = 8640000
maxWarmDBCount = 50
maxHotBuckets = 55
maxDataSize = auto
So going by the configuration for this index, the calculation comes to around (50+55) * 750 MB ≈ 76 GB; even if we go by 10 GB buckets, it comes to (50+55) * 10 GB = 1050 GB of searchable data.
Monthly usage for this index equates to 2906.571 GB. Even with the 10 GB bucket size, searchable capacity would be just 1050 GB, which does fit the observed two-week retention.
How much would we need to increase maxWarmDBCount to keep data searchable for 100 days?
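The required warm bucket count can be estimated from daily volume. A rough sketch, using the 2906.571 GB/month figure and 10 GB auto_high_volume bucket size from above (note this ignores on-disk compression, so the real number of buckets needed may be lower):

```python
# Estimate the maxWarmDBCount needed to keep RETENTION_DAYS searchable.
# Figures taken from the question: ~2906.571 GB ingested per month,
# auto_high_volume buckets of 10 GB on 64-bit hosts.
MONTHLY_GB = 2906.571
BUCKET_GB = 10            # maxDataSize = auto_high_volume (64-bit)
RETENTION_DAYS = 100

daily_gb = MONTHLY_GB / 30                 # ~96.9 GB/day
needed_gb = daily_gb * RETENTION_DAYS      # total searchable volume
needed_buckets = int(needed_gb / BUCKET_GB) + 1  # round up

print(f"{daily_gb:.1f} GB/day -> {needed_gb:.0f} GB over {RETENTION_DAYS} days")
print(f"maxWarmDBCount needs to be roughly {needed_buckets}")
```

At ~97 GB/day, 100 days of searchable data is close to 9.7 TB, i.e. on the order of 970 warm buckets at 10 GB each.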

thanks in advance


jkat54
SplunkTrust

Your frozenTimePeriodInSecs is 100 days: 86400 (secs/day) * 100 = 8640000.

Try this for 30 days of retention:

Add 1 sec to avoid ohSnap

[indexName]
maxHotSpanSecs = 86401 #not 86400 to avoid ohSnap
maxHotIdleSecs = 86401
frozenTimePeriodInSecs = 2592000
...

This makes sure the hot buckets close at the end of each day instead of waiting to fill before they roll.

The net effect is that buckets from 31 days ago are rolled to frozen.
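Applying the same pattern to the original 100-day requirement might look like the following. This is a sketch, not a tested config: the [indexName] stanza name is a placeholder, and the maxWarmDBCount value is an estimate from the ~2906 GB/month figure above with 10 GB auto_high_volume buckets, so adjust it for your actual on-disk (compressed) bucket sizes:

```
[indexName]
maxDataSize = auto_high_volume
maxHotSpanSecs = 86401 #not 86400 to avoid ohSnap
maxHotIdleSecs = 86401
frozenTimePeriodInSecs = 8640000 #100 days
maxWarmDBCount = 1000 #~97 GB/day * 100 days / 10 GB per bucket, rounded up
```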
