We migrated almost all of our existing indexes from traditional indexes with separate warm and cold mount paths to smartstore a little under a year ago.
It's all worked great. However, for indexes with long-term retention, buckets that were in the coldPath at the time of smartstore conversion continue to be stubbed out and localized from S3 back into the coldPath, while everything since conversion uses the warm path, as expected, since that mount is the SPLUNK_DB definition used by the smartstore indexes.
I want to re-map the SPLUNK_COLD path to use the same OS mount, but what is the supported way to do that with smartstore?
From the documentation (https://docs.splunk.com/Documentation/Splunk/7.3.3/Indexer/Moveanindex) it sounds like you would normally manually copy the data from the old path to the new path and then re-map the variable. Does it work the same way with smartstore? Or is it just something like force-clearing the smartstore cache on the OS mount I want to clear off, re-mapping the variable, and then new localization of buckets simply uses the re-mapped path?
- The coldPath is needed during migration, when pre-existing data is migrated to SmartStore.
- As discussed in our documentation: "Cold buckets can, in fact, exist in a SmartStore-enabled index, but only under limited circumstances. Specifically, if you migrate an index from non-SmartStore to SmartStore, any migrated cold buckets use the existing cold path as their cache location, post-migration. In all respects, cold buckets are functionally equivalent to warm buckets. The cache manager manages the migrated cold buckets in the same way that it manages warm buckets. The only difference is that the cold buckets will be fetched into the cold path location, rather than the home path location."
- coldPath and homePath can point to the same volume, but to different directories.
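For example, a minimal indexes.conf sketch of that last point — the volume name, bucket URL, and index name here are placeholders, not values from this thread:

```
# Hypothetical indexes.conf fragment: SmartStore-enabled index where
# homePath and coldPath live on the same OS mount ($SPLUNK_DB),
# just in different directories.
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket

[my_index]
remotePath = volume:remote_store/$_index_name
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```

With this layout, migrated cold buckets are localized under the colddb directory on the same mount as the warm cache, rather than on a separate cold mount.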