
Re-indexing aged-out data process?


Dear All,

We have a clustered-index Splunk 6.3 system on which the administrators set the frozen time period to a very low value (7 days), thinking that was all the retention the data required, so everything older than 7 days has been aged out of the index. The first thing we did was increase the frozen time period, so, thankfully, data is no longer being aged out.

Fortunately, the raw log files for the last month have since been discovered, and we need to re-index this data.

I am guessing that the fishbucket on the Forwarder will stop it from re-sending the data, and I have found a rather useful Answer explaining that I can use the btprobe command to delete the fishbucket entry for an individual file:

./btprobe -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db --file /var/log/access.log --reset

However, if I run this for a file on the Forwarder, will the Indexers pick the file up and re-index the data (since it has already been aged out, there is no record of it), or do I need to do the same on the Indexers as well?

Splunk Employee

Yes. After you run btprobe ... --reset on the UF and restart it, the UF will re-monitor the file without any change to inputs.conf on the UF. The Indexer has no record that the events were indexed in the past, so it simply indexes whatever data comes from the UF.
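The reset-and-restart sequence described above can be sketched as a short shell script. This is a dry run that only prints the commands it would issue; the `$SPLUNK_HOME` default and the monitored file path are assumptions for illustration, so adjust them for your forwarder and remove the `echo` prefixes to actually run it.

```shell
#!/bin/sh
# Dry-run sketch: reset one file's fishbucket entry on a Universal Forwarder,
# then restart so the UF re-monitors the file from the beginning.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunkforwarder}"   # assumed install path
TARGET_FILE="/var/log/access.log"                    # example monitored file

# Stop the forwarder before touching the fishbucket.
echo "$SPLUNK_HOME/bin/splunk stop"

# Delete/reset the fishbucket entry for just this one file.
echo "$SPLUNK_HOME/bin/btprobe -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db --file $TARGET_FILE --reset"

# Restart; the existing inputs.conf monitor stanza picks the file up again.
echo "$SPLUNK_HOME/bin/splunk start"
```

Nothing needs to change on the Indexers for this to work, per the answer above.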


Hi BlueSocket,
You could create a new monitor stanza to re-index only the old data, using the crcSalt = <SOURCE> option in the inputs.conf file, and delete the stanza when you have finished.
That way you can re-index the data without modifying the fishbucket.
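A minimal sketch of such a temporary stanza, assuming the recovered files sit under a hypothetical /var/log/recovered/ directory (the path, index, and sourcetype here are placeholders, not from the thread):

```ini
# Temporary stanza in inputs.conf on the forwarder.
# crcSalt = <SOURCE> salts the CRC with the full path, so the file is
# treated as new even if the fishbucket has seen its content before.
# Delete this stanza once the backlog has been indexed.
[monitor:///var/log/recovered/access.log]
crcSalt = <SOURCE>
index = main
sourcetype = access_combined
```

Note that because the recovered files live at a different path than the originals did, crcSalt = <SOURCE> is enough to make the forwarder read them from scratch.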
