Splunk Enterprise

Why is the age of the data greater than frozenTimePeriodInSecs without the data being deleted?

vumanhtai
Path Finder

Hi Splunk Team

Why is the age of the data greater than frozenTimePeriodInSecs without the data being deleted?

[Screenshot attachment: vumanhtai_0-1594096169238.png]

My index config is as follows:

frozenTimePeriodInSecs = 38880000

Thanks

 


isoutamo
SplunkTrust

Buckets are rolled to frozen only when all events in the bucket are at least frozenTimePeriodInSecs old. When a single bucket contains both "newer" and "older" events, it is rolled to frozen only once its newest event is old enough.

r. Ismo
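One way to check this for yourself (a sketch — the index name is a placeholder) is to compare each bucket's time span against the retention period with the dbinspect command, which reports the oldest and newest event times per bucket:

```spl
| dbinspect index=your_index
| eval oldest_age = now() - startEpoch, newest_age = now() - endEpoch
| where oldest_age > 38880000 AND newest_age < 38880000
| table bucketId, state, startEpoch, endEpoch, oldest_age, newest_age
```

Any bucket this returns holds events older than the retention period but will not be frozen yet, because its newest event is still inside the window.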


jordanking1992
Path Finder

My concern is that if enough indexes are storing data longer than the expected retention, do we rely on maxVolumeSize to start deleting events when the disk starts to fill up?



jordanking1992
Path Finder

Does this mean that the bucket will eventually move to frozen when the "newest" event is older than the frozenTimePeriodInSecs setting?


Is there a way to prevent this behavior so that all indexes keep data no longer than the frozenTimePeriodInSecs setting?

 

Thanks,

J


MiniNenya
Engager

I know this is an old topic, but it took me a long time to understand it, so I guess it's worth helping a little 🙂

There's no direct approach; however, if you do some digging you can arrive at an acceptable solution.

I also found that my indexes were keeping data beyond frozenTimePeriodInSecs. That's because, if the ingestion rate is not very high, some buckets can contain data belonging to more than one day of ingestion, and those buckets won't be frozen until their most recent event reaches the frozenTimePeriodInSecs limit. If one bucket holds, say, a whole month's data, by the time it's frozen it will exceed frozenTimePeriodInSecs by a month.

What I did was study the average amount of data ingested per day by each index (in my case, around 0.5 GB) and set maxDataSize to that value; this way each hot bucket will be at most 0.5 GB and will contain data from just one day.

You'll find that Splunk's criteria for bucket creation are not obvious; sometimes it creates, for the same date, a 45 MB bucket and another of 123 MB (just an example) and I don't understand why, but the important thing is that this makes the rotation much more "agile", since buckets are immediately deleted once they reach the frozenTimePeriodInSecs limit.
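As a sketch in indexes.conf, assuming a hypothetical index name and roughly 0.5 GB/day of ingestion (note that maxDataSize is specified in megabytes):

```ini
[your_index]
frozenTimePeriodInSecs = 38880000
# Cap hot buckets at ~0.5 GB so each bucket holds roughly
# one day of data at this ingestion rate; maxDataSize is in MB.
maxDataSize = 500
```

With smaller buckets spanning narrower time ranges, the gap between the oldest event's age and the moment the whole bucket is frozen shrinks accordingly.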

MiniNenya
Engager

I forgot to mention that, alternatively, you can configure your hot buckets to roll to warm based solely on their age with the maxHotSpanSecs parameter.
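For example (a sketch with a hypothetical index name), limiting each hot bucket to one day of event time:

```ini
[your_index]
frozenTimePeriodInSecs = 38880000
# Roll a hot bucket to warm once the events it contains
# span more than one day (86400 seconds) of event time.
maxHotSpanSecs = 86400
```

This bounds how much a bucket's newest event can lag behind its oldest, so buckets are frozen at most about one day later than the retention period implies.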

Hope it helps!

 

isoutamo
SplunkTrust

Hi

It works just like that: when the newest event is old enough, the whole bucket is moved to frozen.

At least I don't know of such a feature. When you think about how events are stored in buckets, you can probably see how hard, even impossible, that kind of process would be. Of course, you could try to avoid the situation by planning your indexes, e.g. deciding which data goes to which index, and ensuring you don't land older data (e.g. when you start collecting from new hosts that contain old data) in the same indexes.

r. Ismo
