Getting Data In

Bucket retention and frozenTimePeriodInSecs not working

jadengoho
Builder

Hi all,
I have an issue with our indexes: it seems frozenTimePeriodInSecs and maxHotSpanSecs are not working.
Buckets are past frozenTimePeriodInSecs but are still hot buckets.
[screenshots: bucket listing showing hot buckets, including bucket ID 134, older than frozenTimePeriodInSecs]

  [skype]
    coldPath = volume:cold/skype/colddb
    coldPath.maxDataSizeMB = 400000
    coldToFrozenDir = $SPLUNK_HOME/frozen/skype
    frozenTimePeriodInSecs = 8035200
    homePath = volume:primary/skype/db
    homePath.maxDataSizeMB = 400000
    maxDataSize = auto_high_volume
    maxHotBuckets = 10
    maxHotSpanSecs = 7776000
    maxTotalDataSizeMB = 400000
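
For reference, a search along these lines (it uses the dbinspect command; the field names assume its standard output) can list the hot buckets for this index and how old their newest events are:

    | dbinspect index=skype
    | where state="hot"
    | eval ageDays = round((now() - endEpoch) / 86400, 1)
    | table bucketId, state, startEpoch, endEpoch, ageDays
    | sort - ageDays

Any hot bucket whose newest event (endEpoch) is more than 8035200 seconds (93 days) old is past frozenTimePeriodInSecs yet still hot, which matches the accepted answer below.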

Thanks in advance.

0 Karma
1 Solution

harsmarvania57
Ultra Champion

Hi @jadengoho,

It looks like you have a timestamp issue, or you are ingesting very old logs into Splunk, and because of that Splunk is creating multiple hot buckets.

Based on my knowledge, a hot bucket rolls from hot to warm when it hits maxDataSize or maxHotSpanSecs, whichever comes first, or when you exceed maxHotBuckets = 10 from your configuration (when the 11th hot bucket is created, the oldest hot bucket rolls from hot to warm). If you never reach 10 hot buckets, a bucket only rolls when Splunk restarts or when it hits maxDataSize or maxHotSpanSecs.

In your case it looks like the bucket with ID 134 was created but never hit maxDataSize or maxHotSpanSecs, so it will only roll when you restart Splunk or when enough new data is ingested into it to reach one of those limits. Otherwise it sits as an idle hot bucket, because maxHotIdleSecs defaults to 0, which means infinite time (a value of 0 turns off the idle check). So either fix the timestamp recognition problem in Splunk, or, if timestamp recognition is correct but the data really is that old, set maxHotIdleSecs to a few days (for example 7 or 14 days); after that period, a hot bucket that has received no new events rolls from hot to warm. Once the bucket with ID 134 rolls from hot to warm, it will immediately be removed from the index (frozen) because it has already exceeded frozenTimePeriodInSecs.
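
As a minimal sketch of that suggestion, assuming a 7-day idle window (604800 seconds; pick a value that fits your data), the existing [skype] stanza would gain one setting:

    [skype]
    # roll an idle hot bucket to warm after 7 days without receiving new events
    maxHotIdleSecs = 604800

indexes.conf changes generally need an indexer restart (or a cluster bundle push) to take effect.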


jadengoho
Builder

Thanks for this.
I did a rolling restart of the indexers, and after that some of the buckets rolled and others were frozen.
Yes, we are still facing a timestamp issue because the logs are hard to parse.
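
On the timestamp side, a hedged props.conf sketch (the sourcetype name, TIME_PREFIX, and TIME_FORMAT below are hypothetical placeholders and must match the real log layout) that pins timestamp extraction and bounds how old a parsed date may be could look like this:

    [skype:example_sourcetype]
    # hypothetical anchors: match these to the actual log format
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 25
    # events whose parsed date is older than this many days get the timestamp
    # of the last acceptable event instead, so misparsed dates stop creating old hot buckets
    MAX_DAYS_AGO = 93

This only affects newly indexed data; buckets that already hold misdated events still need the maxHotIdleSecs approach above.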

0 Karma