Deployment Architecture

Is there a maximum time frame for cold and frozen buckets?

Explorer

I've searched through the docs for a maximum value for the bucket retention policy, and I haven't found anywhere that states a maximum time frame you can set in the indexes.conf file.

I'm aware of the option where you can set the time frame in seconds with:

frozenTimePeriodInSecs =
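For reference, 7 years works out to 7 × 365 × 86400 = 220752000 seconds, so a sketch of the stanza (the index name `my_index` is just a placeholder) might look like:

```ini
[my_index]
# 7 years * 365 days * 86400 seconds = 220752000
frozenTimePeriodInSecs = 220752000
```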

There is an audit requirement where our logs need to be stored for 7 years in remote storage.
Does anybody know if there is a max time limit for buckets and has anyone made a policy to keep frozen logs for multiple years?


Splunk Employee

You could write an archiving script to send the frozen buckets to your remote storage. See the Archive indexed data topic in the Managing Indexers and Clusters of Indexers manual, as well as the sample script in $SPLUNK_HOME/bin/coldToFrozenExample.py.
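A simplified sketch of such an archiving script, modeled on what coldToFrozenExample.py does (Splunk calls the script configured via coldToFrozenScript with the bucket directory as its only argument; the archive path here is a hypothetical remote-storage mount):

```python
# Sketch of a cold-to-frozen archiving script. Splunk invokes the
# configured script with the frozen bucket's path as argv[1].
import os
import shutil
import sys

# Hypothetical mount point for the remote (7-year) storage
ARCHIVE_DIR = "/mnt/remote_archive/splunk_frozen"


def archive_bucket(bucket_path, archive_dir):
    """Copy a bucket's rawdata into the archive directory.

    Only rawdata is kept; index (.tsidx) files can be discarded
    to save archive space, since a bucket can be rebuilt from
    rawdata if it is ever thawed.
    """
    if not os.path.isdir(bucket_path):
        raise ValueError("Not a bucket directory: " + bucket_path)
    os.makedirs(archive_dir, exist_ok=True)
    dest = os.path.join(archive_dir, os.path.basename(bucket_path))
    rawdata = os.path.join(bucket_path, "rawdata")
    shutil.copytree(rawdata, os.path.join(dest, "rawdata"))
    return dest


if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: coldToFrozen.py <bucket_path>")
    archive_bucket(sys.argv[1], ARCHIVE_DIR)
```

Point coldToFrozenScript at a script like this in indexes.conf, and test it on a throwaway index first: if the script exits non-zero, Splunk will not delete the bucket.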


Communicator

I have the same requirement. Just set frozenTimePeriodInSecs and set maxTotalDataSizeMB to some very large value (greater than the physical disk you have). Then nothing will get deleted until the frozen time period is reached, which in your case is 7 years.
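A sketch of that approach in indexes.conf (the index name and the size value are illustrative; maxTotalDataSizeMB is in megabytes):

```ini
[my_index]
# Keep events for 7 years (7 * 365 * 86400 seconds) before freezing
frozenTimePeriodInSecs = 220752000
# Set larger than the physical disk so size never triggers freezing first
maxTotalDataSizeMB = 100000000
```

Note the trade-off: without an archiving script, buckets that do freeze are deleted, so this only works if the disk genuinely never fills within the retention window.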


Explorer

Thanks for this; it should be useful for what I'm looking for.

I also need to keep my logs in the hot bucket for 6 months. Which option within the indexes.conf file would you recommend for that?
