My deployment is a single Splunk 6.5.x instance that needs to retain only the last 30 days of events, so I configured frozenTimePeriodInSecs = 2592000 in indexes.conf. However, it does not always work as expected.
What I can tell is that my indexes keep growing, and a search with "latest=-30d" sometimes still returns events that should have been aged out. When an index reaches the maximum size configured at index creation, or when I restart the Splunk instance, the index size drops to roughly half of the maximum.
Why is there such a significant delay before Splunk purges old events, and how can I fix it?
Based on the indexes.conf documentation, Splunk removes data from an index based on two parameters, frozenTimePeriodInSecs and maxTotalDataSizeMB, whichever is hit first.
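For reference, a minimal indexes.conf stanza setting both limits might look like the sketch below (the index name `myindex` and the size value are illustrative, not from your setup):

```ini
[myindex]
# Freeze (delete, by default) buckets whose newest event is older than 30 days
frozenTimePeriodInSecs = 2592000
# Also freeze the oldest buckets once the index exceeds this total size (MB)
maxTotalDataSizeMB = 500000
```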
Now, Splunk stores data in hot, warm, and cold buckets. In your case, when you set frozenTimePeriodInSecs = 2592000, Splunk removes only those warm or cold buckets in which all events are older than 30 days.
IMPORTANT: Every event in the bucket (DB) must be older than frozenTimePeriodInSecs before the bucket will roll to frozen.
So let's say one bucket's earliest event is 45 days old and its latest event is 25 days old. That bucket (DB) will not be removed, and when you search you will still get data older than 30 days from it. The bucket is removed only once all events in it are older than frozenTimePeriodInSecs.
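You can check the time span of each bucket yourself with the dbinspect search command; a sketch (the index name is illustrative):

```
| dbinspect index=myindex
| eval oldest=strftime(startEpoch, "%F"), newest=strftime(endEpoch, "%F")
| table bucketId state oldest newest
```

Any bucket whose "newest" date is inside the retention window will not freeze yet, even if its "oldest" date is well outside it.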
When you restart Splunk, it rolls hot buckets to warm and warm buckets to cold according to your indexes.conf configuration. If all events in a hot bucket are older than 30 days, that bucket rolls to warm and is then immediately removed, which is why your index size decreases suddenly after a restart.
Event deletion is managed at the bucket level, so a bucket is frozen (deleted by default) only when its latest event falls outside the retention period.
This means some events older than the retention period can remain searchable online.
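To see why, consider that warm/cold bucket directories are named db_&lt;newestEventEpoch&gt;_&lt;oldestEventEpoch&gt;_&lt;id&gt;, so the freeze decision depends only on the newest timestamp. A rough sketch of that logic (the helper function is hypothetical, for illustration only, not a Splunk API):

```python
import time

# Hypothetical helper: decide whether a bucket is eligible to freeze,
# given its directory name db_<newestEventEpoch>_<oldestEventEpoch>_<id>.
# A bucket freezes only when its NEWEST event exceeds frozenTimePeriodInSecs.
def bucket_can_freeze(dirname: str, frozen_secs: int, now: float = None) -> bool:
    now = time.time() if now is None else now
    _, newest, _oldest, *_ = dirname.split("_")
    return now - int(newest) > frozen_secs

# Example: a bucket whose newest event is 25 days old (and oldest is 45 days
# old) does NOT freeze under 30-day retention, so its 45-day-old events
# stay searchable.
now = 1_700_000_000
day = 86_400
name = f"db_{now - 25*day}_{now - 45*day}_7"
print(bucket_can_freeze(name, 30 * day, now))  # False: newest event too recent
```

Once every event in the bucket ages past 30 days (i.e. the newest timestamp is more than 2592000 seconds old), the whole bucket freezes at once, which matches the sudden size drops you observed.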