Splunk Search

Reduce retention period for old index

trkswe
New Member

Hi All,

We have an index named axo, which is around 3 years old and holds around 300 GB of data.
We have now decided to reduce the index size by retaining only the latest 90 days of data.

We have updated "frozenTimePeriodInSecs = 7776000" in /opt/splunk/etc/system/local/indexes.conf.
We also ran the btool command (./splunk cmd btool indexes list) to check whether multiple .conf files set this value.
The btool output also showed "frozenTimePeriodInSecs = 7776000", so the setting appears to be applied correctly.
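
For reference, the stanza in that file looks roughly like this (the path settings below are illustrative rather than our exact values; frozenTimePeriodInSecs is the setting we changed, 7776000 seconds = 90 days):

[axo]
homePath   = $SPLUNK_DB/axo/db
coldPath   = $SPLUNK_DB/axo/colddb
thawedPath = $SPLUNK_DB/axo/thaweddb
frozenTimePeriodInSecs = 7776000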

When we search, we still see old data from the past 2 years.

Is this the correct method of reducing the size of the index?
Do we need to follow any other method? Please guide.

PS: "maxHotSpanSecs = 7776000"

Thank you.

1 Solution

DavidHourani
Super Champion

Hi @trkswe,

Changing frozenTimePeriodInSecs does purge the older logs, but if you have old and new logs mixed in the same bucket of your index, then those buckets will only expire once the newest data in them is older than 90 days.
This means that if you indexed 2-year-old data at the same time as data less than 90 days old, the old data will not be purged until the most recent data in those buckets goes over 90 days. This usually happens when you index multiple days/months/years of logs at the same time.
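
To check whether this is the case, you can look at the time span of each bucket with something like the search below (index name taken from your post; dbinspect reports the oldest and newest event timestamps per bucket, and a bucket only rolls to frozen once its newest event is older than frozenTimePeriodInSecs):

| dbinspect index=axo
| eval oldestEvent=strftime(startEpoch, "%Y-%m-%d"), newestEvent=strftime(endEpoch, "%Y-%m-%d")
| table bucketId state oldestEvent newestEvent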

Have a look at this wiki, it will help: https://wiki.splunk.com/Deploy:SplunkBucketRetentionTimestampsAndYou

PS: It's best practice to avoid using system/local for configurations. Try making an app and putting your configs there instead; it will be easier to maintain and manage.
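
For example (the app name my_index_settings is just a placeholder), the setting could live in /opt/splunk/etc/apps/my_index_settings/local/indexes.conf:

[axo]
frozenTimePeriodInSecs = 7776000

Splunk then needs a restart for the indexes.conf change to take effect.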

Cheers,
David

trkswe
New Member

Thanks a lot.

Confirmed with the analyst; your assumption was right.
The old data was ingested a few months ago.

Thanks for the tip on best practice as well.
