Splunk Search

Reduce index period for old Index

trkswe
New Member

Hi All,

We have an index named axo, which is around 3 years old and holds around 300 GB of data.
We have now decided to reduce the index size by retaining only the latest 90 days of data.

We have set "frozenTimePeriodInSecs = 7776000" in /opt/splunk/etc/system/local/indexes.conf.
We also ran the btool command (./splunk cmd btool indexes list) to check whether multiple .conf files define this setting.
The btool output also showed "frozenTimePeriodInSecs = 7776000", so the value is being applied.

However, when we search, we still see data from the past 2 years.

Is this the correct method for reducing the size of an index?
Do we need to follow any other steps? Please guide.

PS: "maxHotSpanSecs = 7776000"

Thank you.

1 Solution

DavidHourani
Super Champion

Hi @trkswe,

Changing frozenTimePeriodInSecs does purge older logs, but Splunk freezes data one bucket at a time: if old and new events are mixed in the same bucket of your index, that bucket will only expire once its newest event is older than 90 days.
This means that if you indexed 2-year-old data at the same time as data less than 90 days old, the old data will not be purged until the most recent event in the same bucket passes the 90-day mark. This usually happens when you index multiple days/months/years of logs at the same time.
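To confirm whether this is what's happening, you can inspect the time span of each bucket in the index. A search along these lines (using the index name axo from your post) should show buckets whose earliest and latest events are years apart:

```
| dbinspect index=axo
| eval earliest=strftime(startEpoch, "%Y-%m-%d"), latest=strftime(endEpoch, "%Y-%m-%d")
| table bucketId, state, earliest, latest
```

Any bucket whose latest event is within the last 90 days will survive, no matter how old its earliest event is.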

Have a look at this wiki; it will help: https://wiki.splunk.com/Deploy:SplunkBucketRetentionTimestampsAndYou

PS: It's best practice to avoid using system/local for configurations. Try creating an app and putting your configs there instead; it will be easier to maintain and manage.
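For example, something like this (the app name is just a placeholder; note the setting goes under the [axo] stanza so it only affects that index, rather than at the top of the file where it would apply globally):

```
# e.g. /opt/splunk/etc/apps/my_retention_settings/local/indexes.conf
[axo]
frozenTimePeriodInSecs = 7776000
```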

Cheers,
David



trkswe
New Member

Thanks a lot.

Confirmed with the analyst: your assumption was right.
The old data was ingested a few months ago.

Thanks for the tip on best practice as well.
