Getting Data In

How to troubleshoot why frozenTimePeriodInSecs is not taking effect?


[screenshot: index retention view from the DMC app]

We have this config set up in indexes.conf, but the data still seems to be present after 365 days. Is there anything we can check?

homePath = volume:hot/nmon/db
coldPath = volume:cold/nmon/colddb
thawedPath = $SPLUNK_DB/nmon/thaweddb
coldToFrozenDir = $SPLUNK_DB/nmon/frozendb
repFactor = auto
frozenTimePeriodInSecs = 31536000
rotatePeriodInSecs = 60
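As a sanity check on the value above, frozenTimePeriodInSecs is just the retention period expressed in seconds; the conversion is plain arithmetic (illustrative Python, the helper name is made up):

```python
def days_to_frozen_secs(days):
    """Convert a retention period in days to a frozenTimePeriodInSecs value."""
    return days * 24 * 3600

# 365 days -> 31536000, matching the stanza above
print(days_to_frozen_secs(365))
```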



Super Champion

Hi, may I know how you created this screenshot (a search query, an app, or something else)?



It's from an app called DMC installed on the cluster master.
Ours is a distributed environment.



Data is rolled to frozen when the newest/latest event in that particular bucket, not the oldest, is OLDER than frozenTimePeriodInSecs.

So you WILL have data older than those date settings. This is expected behaviour.

If Splunk deleted the bucket as soon as the first (oldest) event crossed that threshold, then it would potentially be deleting other, newer events along with it.
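The rule described above can be sketched in a few lines (illustrative Python, not Splunk internals; the function name is made up):

```python
import time

def bucket_can_freeze(latest_event_epoch, frozen_time_period_in_secs=31536000, now=None):
    """A bucket is a freeze candidate only when its NEWEST event's
    timestamp is older than frozenTimePeriodInSecs; the age of the
    oldest event in the bucket is irrelevant."""
    if now is None:
        now = time.time()
    return (now - latest_event_epoch) > frozen_time_period_in_secs

# A bucket whose newest event is only 100 days old does NOT freeze,
# even if the same bucket also holds events that are 2 years old.
```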

edit: I couldn't see the picture you attached very well, but I've just had a look by right-clicking and opening it in another window.

Hmm, for small indexes I could understand why there might be old events sitting near newer events that prevent deletion.
However, your np_aap index should easily have enough data spread across enough separate buckets that it should be deleting events.

Check your _internal logs and see if any Freeze Async messages are triggering anywhere. I'd guess they probably aren't.

edit2: you have a coldToFrozenDir configured. Is any data showing up in those directories?

Splunk Employee

To add to Lucas' answer, which is right on the money, there are some things you can do to control the roll of your data through the various bucket stages.

see indexes.conf.spec for more info:

you can control the bucket sizes with:

maxDataSize = auto OR auto_high_volume

to ensure the buckets are sized according to the volume of data the index receives.

you can control your bucket spans with:

maxHotSpanSecs = 86400

which would ensure your buckets span no more than a day, allowing you to satisfy time-based retention policies. [Edit: actually, because your index doesn't look like a high-volume index, you could extend this out to a week; then you could satisfy 365 days within 7 days and keep your bucket count down]

The bucket span is what will be important to you to ensure you can roll entire buckets once they eclipse 365 days.

In the meantime you could look at:

quarantinePastSecs and quarantineFutureSecs to avoid buckets getting polluted with "fringe events"
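Putting those suggestions together, a stanza along these lines would cap bucket spans so that whole buckets can age out on schedule (a sketch only; the values are illustrative, not a sizing recommendation, and the quarantine values are placeholders to be tuned for your data):

```
[nmon]
maxDataSize = auto
# cap hot bucket span at one week, so a 365-day retention policy
# is satisfied within at most 372 days
maxHotSpanSecs = 604800
# keep events with far-past/far-future timestamps out of regular buckets
quarantinePastSecs = 77760000
quarantineFutureSecs = 2592000
frozenTimePeriodInSecs = 31536000
```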

Edit: @Lucas K, no you are right, the config stanza is for the nmon index which appears to be low volume


@Lucas K @mmodestino_splunk

Thanks for the reply. So, as I understand:
1> Even if some data in a bucket is older than 365 days, as long as other data in that same bucket is newer, it won't roll the bucket to frozen. Is this correct?

2> Like np_aap, we have an AAP_PROD index for production, and it has very heavy volumes of data (the license usage report says an average of 150 GB, but the above image shows it only adding 20 GB per index; we have 4 indexers).
It shows data since 2002 / 365 (see image).
For the AAP_PROD index I see data from March 9th until today. March 9th was the day we built this new clustered infrastructure.

The Freeze Async message in the internal log looks like this; I don't know what to search for...

08-20-2016 23:32:32.228 -0700 INFO  StreamedSearch - Streamed search connection terminated: search_id=remote_p01apl.ent.com_1471761143.19590_12B16A82-0C9F-415D-A8A8-AEEB96B9AA2B,, active_searches=6, elapsedTime=8.753, search='litsearch index=_internal idex=aap_prod "Freeze Async" | fields  keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server"  | remotetl  nb=300 et=2147483647.000000 lt=0.000000 remove=true max_count=1000 max_prefetch=100', savedsearch_name=""

Thanks again for looking into this.



That log entry you pasted is you looking at yourself 🙂

It's your own search that is looking for "freeze".

Your search should look like this:

index=_internal host=myindexers* Freeze source="/opt/splunk/var/log/splunk/splunkd.log"

Change the host pattern to match your indexer hosts.

The freeze events you are looking for should look like the following.

08-23-2016 04:08:16.727 +0000 INFO BucketMover - AsyncFreezer freeze succeeded for bkt='/opt/splunk/var/lib/splunk/main/db/db_1392090887_1302269766_1611'
08-23-2016 04:08:16.701 +0000 INFO BucketMover - will attempt to freeze: candidate='/opt/splunk/var/lib/splunk/main/db/db_1392090887_1302269766_1611' because frozenTimePeriodInSecs=40176000 is exceeded by the difference between now=1471925295 and latest=1392090887
08-23-2016 03:53:16.134 +0000 INFO BucketMover - AsyncFreezer freeze succeeded for bkt='/volumes/cold/defaultdb/colddb/db_1428794249_1428794249_417'

In those examples you can see that a freeze succeeded for a specific bucket name and its full file path is shown also.
You can also see the attempts and the reasons why it was rolled to frozen.
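Those freeze messages can be cross-checked by hand: as the log lines above show, the bucket directory name carries the epochs in the form db_<latestEpoch>_<earliestEpoch>_<id>, so the freeze decision is reproducible from the path alone (illustrative Python; parse_bucket_path and will_freeze are made-up helper names):

```python
import re

_BUCKET_RE = re.compile(r"db_(\d+)_(\d+)_(\d+)$")

def parse_bucket_path(path):
    """Pull (latest, earliest, id) out of a db_<latest>_<earliest>_<id> name."""
    m = _BUCKET_RE.search(path)
    if m is None:
        raise ValueError("not a db_<latest>_<earliest>_<id> path: %s" % path)
    latest, earliest, bucket_id = (int(g) for g in m.groups())
    return latest, earliest, bucket_id

def will_freeze(path, frozen_time_period_in_secs, now):
    """Mirror the BucketMover check: freeze when now - latest exceeds the limit."""
    latest, _, _ = parse_bucket_path(path)
    return (now - latest) > frozen_time_period_in_secs

# The second log line above: latest=1392090887, now=1471925295,
# frozenTimePeriodInSecs=40176000 -> the difference is larger, so it froze.
```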



@Lucas K @mmodestino_splunk added a new image


Splunk Employee


1> so even if the data in the buckets is older than 365 days, if the data in that same bucket is new it won't roll the bucket to frozen, is this correct?

^^ That is correct. All events' timestamps (not index time) must be older than frozenTimePeriodInSecs.
