Up to now, we have never frozen data. However, we now have a requirement to freeze some data for years.
I need to demonstrate how this works in a development environment.
I have created a new index, defined coldToFrozenDir, and set frozenTimePeriodInSecs to 600 (10 minutes).
I have created an input for a text file and filled it with about 100k lines of data.
The data is being indexed successfully.
The coldToFrozenDir directory was created, but there is no frozen data in it.
I suspect it's because the data is still hot.
Is there a way to force data through the bucket cycle so I can see it show up frozen?
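One way to push buckets along in a test environment is to force a hot-to-warm roll via the CLI. This is a sketch, not from the thread: the index name timtest and the admin credentials are placeholders, so substitute your own.

```shell
# Force all hot buckets in the index to roll to warm.
# timtest and admin:changeme are placeholders for your index and credentials.
splunk _internal call /data/indexes/timtest/roll-hot-buckets -method POST -auth admin:changeme
```

Alternatively, setting maxHotIdleSecs in the index's indexes.conf stanza (e.g. to 60) makes hot buckets roll to warm on their own once data stops arriving; after that, frozenTimePeriodInSecs can take effect.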
I tried your settings on my laptop and wrote a scheduled search that runs every 5 minutes to fill the index:
index = _internal | head 1000 | collect index=timtest
Try running this search to see if it's working:
index=_internal sourcetype=splunkd component=BucketMover freeze
It works fine on my end. For reference, my indexes.conf settings for the test index:
[timtest]
coldPath = $SPLUNK_DB/timtest/colddb
homePath = $SPLUNK_DB/timtest/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/timtest/thaweddb
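For the freeze test itself, the settings from the question still need to be merged into that stanza. A sketch, assuming the index is named timtest; the coldToFrozenDir path below is a placeholder, since the actual path from the question is not shown:

```ini
[timtest]
homePath   = $SPLUNK_DB/timtest/db
coldPath   = $SPLUNK_DB/timtest/colddb
thawedPath = $SPLUNK_DB/timtest/thaweddb
maxTotalDataSizeMB = 512000
# Freeze buckets once their newest event is older than 10 minutes (from the question)
frozenTimePeriodInSecs = 600
# Placeholder path: archive frozen buckets here instead of deleting them
coldToFrozenDir = /opt/splunk/frozen/timtest
```

Note that frozenTimePeriodInSecs is evaluated against a bucket's newest event, and only buckets that have already rolled out of hot are eligible, which is why freshly indexed test data can sit unfrozen past the 10-minute mark.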