I am using "maxHotSpanSecs" to limit the time span of each bucket.
I only added "maxHotSpanSecs = 2592000" (30 days) to the test stanza of local/indexes.conf
(index=test).
Result: each bucket still spans more than 30 days. For example, one bucket covers 2016/04/01 ~ 2017/02/08.
I don't know why the buckets are not being cut at the boundary. Any ideas?
Thanks.
How long have you had these settings in place? You mention one bucket has data from 2016/04/01 ~ 2017/02/08; what about all the buckets created since you made this change?
Splunk will not go back in time and readjust existing buckets to your new boundaries. That is to say, if you didn't have these settings before, the buckets could have contained upwards of 10 GB or 90 days of data, whichever is greater. Also, fringe events that arrive out of order can be indexed into the same bucket.
For example, if I have cold and warm buckets from 2015 and 2016, and a hot bucket for 2017, and events come in with timestamps from 2014, those 2014 events will be dropped into the hot bucket. That hot bucket will then show data spanning 2014 - 2017. In new and large environments this happens all the time as you on-board new data sources.
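As a side note, you can limit how far out-of-order events are allowed to widen a hot bucket's span by quarantining them. The `quarantinePastSecs` and `quarantineFutureSecs` settings in indexes.conf divert events whose timestamps fall too far from the current time into a separate quarantine bucket instead of the active hot bucket. A sketch of what that could look like; the values below are examples for illustration, not recommendations:

```ini
# local/indexes.conf -- example values only, tune for your environment
[test]
# Events with timestamps more than 30 days in the past go to a
# quarantine bucket instead of widening the current hot bucket's span.
quarantinePastSecs = 2592000
# Likewise for events stamped more than 1 day in the future.
quarantineFutureSecs = 86400
```

This does not retroactively fix existing wide buckets, but it keeps newly arriving stragglers from stretching new hot buckets.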
[test]
coldPath = $SPLUNK_DB/test/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/test/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/test/thaweddb
maxHotSpanSecs = 259200
maxWarmDBCount = 500
[test_1]
coldPath = $SPLUNK_DB/test_1/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/test_1/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/test_1/thaweddb
maxHotSpanSecs = 259200
maxHotBuckets = 1
maxWarmDBCount = 500
In the second stanza ([test_1]) you set maxHotBuckets = 1, which has this documented effect:
NOTE: If you set maxHotBuckets to 1, Splunk attempts to send all
events to the single hot bucket and `maxHotSpanSecs` will not be
enforced.
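If you want maxHotSpanSecs honored for test_1, one option is to remove the maxHotBuckets override or raise it so Splunk can roll a hot bucket when its span is exceeded. A sketch of the revised stanza; the value 3 is only an example (it matches the shipped default in many Splunk versions):

```ini
[test_1]
coldPath = $SPLUNK_DB/test_1/colddb
homePath = $SPLUNK_DB/test_1/db
thawedPath = $SPLUNK_DB/test_1/thaweddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
maxTotalDataSizeMB = 512000
maxHotSpanSecs = 259200
# Allow more than one concurrent hot bucket so maxHotSpanSecs
# can be enforced; with maxHotBuckets = 1 it is ignored.
maxHotBuckets = 3
maxWarmDBCount = 500
```

Remember that the new span limit only applies to buckets created after the change and an indexer restart; existing buckets keep their current boundaries.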