Getting Data In

How to freeze and recover older files in an index


I am fairly new to Splunk and have been trying to piece together my understanding of things via the numerous answers in the Splunk knowledge base. However, at this point I'm honestly clueless about how to fix this issue:

I have a sizeable block of old logs (100+ GB) going back about two to three years. After noticing that these old logs were not being archived I began consulting the KBs to find a solution.

So far my understanding is as follows:
1. To instruct Splunk to archive files automatically, you need to give the index two things to reference:
a. coldToFrozenDir = the location where Splunk will put cold logs; this is an archive of frozen logs that can be thawed should the need arise.
b. frozenTimePeriodInSecs = the length of time (in seconds) a log will sit before Splunk sends it to the archive above.
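In indexes.conf, those two settings might look something like the following (the index name and archive path here are made up for illustration):

```ini
# Hypothetical example; real index names and paths will differ
[my_index]
# Where Splunk moves frozen data instead of deleting it
coldToFrozenDir = D:\splunk_frozen\my_index
# Freeze anything older than roughly 2 years (2 * 365 * 86400 = 63072000)
frozenTimePeriodInSecs = 63072000
```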

With my very limited knowledge in mind here's what I THINK I need to do:
1. Stop Splunk
2. Create a frozen directory
3. Create a new conf file in the $SPLUNK_HOME\etc\system\local directory, including the new frozen directory as the coldToFrozenDir.
4. To test, set frozenTimePeriodInSecs back: not quite to my goal, but far enough to archive some of the oldest stuff.
5. Save these changes in my new conf file and start Splunk.

Is this accurate? If I adjust my index to include a cold-to-frozen directory, then specify a limit to the age of my logs, will this help me clear out some of my older stuff without deleting anything?

Once files appear in the frozen directory, to retrieve them do I simply:
1. Copy from frozen to thawed
2. Navigate to the CLI and run the splunk rebuild command?

I realize many of these questions may seem a bit simple but I really could use a bit of info on how to solve this.



You have the general idea. Allow me to make a few corrections.

Splunk freezes buckets, not logs. A bucket is a component of an index and may contain events from multiple "logs". A bucket does not freeze until the newest event in it is older than frozenTimePeriodInSecs.

The steps:
1. Create a frozen directory for each index
2. Add coldToFrozenDir, frozenTimePeriodInSecs, thawedPath, and maxHotSpanSecs=86400 to $SPLUNK_HOME/etc/system/local/indexes.conf or (better) $SPLUNK_HOME/etc/apps/my_indexes/default/indexes.conf. maxHotSpanSecs ensures no bucket spans more than a day's worth of events.
3. Restart Splunk
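Putting the steps above together, a stanza might look like this (the index name and frozen path are examples only, not settings from your environment):

```ini
# Example stanza for $SPLUNK_HOME/etc/apps/my_indexes/default/indexes.conf
[my_index]
# Archive frozen buckets here instead of deleting them
coldToFrozenDir = /opt/splunk_frozen/my_index
# 2 years = 2 * 365 * 86400 seconds
frozenTimePeriodInSecs = 63072000
# Where thawed buckets go for this index
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Keep each bucket's events within a one-day span
maxHotSpanSecs = 86400
```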

Be aware that frozen buckets are completely forgotten by Splunk. To search a frozen bucket, you must move it to the appropriate thawedPath and rebuild the index. Splunk does not manage thawed buckets, so it's up to you to remove them when they're no longer needed.
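As a sketch, that thaw-and-rebuild procedure on Windows might look like this (all paths and the bucket name are hypothetical):

```bat
REM Copy the frozen bucket into the index's thawedPath
REM (leave the archive copy in place until you've verified the restore)
xcopy /E /I "D:\splunk_frozen\my_index\db_1389744000_1389657600_42" "D:\Program Files\Splunk\var\lib\splunk\my_index\thaweddb\db_1389744000_1389657600_42"

REM Rebuild the bucket so Splunk can search it again
"D:\Program Files\Splunk\bin\splunk" rebuild "D:\Program Files\Splunk\var\lib\splunk\my_index\thaweddb\db_1389744000_1389657600_42"
```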

If this reply helps you, an upvote would be appreciated.


Thanks!!!! 😄

Now with regard to rebuilding an index:
If I manually move a frozen file to the thawed directory and use the splunk rebuild command from the CLI, I get a "--bucket-path is not an extant directory" message.

The syntax I am using is as follows: splunk rebuild (for example: splunk rebuild D:\program files\splunk\var\lib\splunk\defaultdb\thaweddb\db_132305398923_34)

Is there something in my syntax I'm missing?


It's Windows, so you probably need quotation marks around the file path\name.
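For example, the quoted version of the command from the question would be:

```shell
splunk rebuild "D:\program files\splunk\var\lib\splunk\defaultdb\thaweddb\db_132305398923_34"
```

Without the quotes, the space in "program files" splits the path into two arguments, which is why Splunk reports that the bucket path is not an extant directory.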

If this reply helps you, an upvote would be appreciated.


You might also find my post here with restor.ps1 useful. The code is intended to help you automate restores.
