Let's assume I have a 20-node indexer cluster deployment. SmartStore was enabled and migration started, but the migration was interrupted by a network issue. We are trying to figure out the best way to restart the migration. How can that be done, and what is the role of the file .buckets_synced_to_remote_storage in migration?
At startup, if an index is S2-enabled, Splunk checks whether buckets need to be uploaded. To do this, it looks for the file $SPLUNK_DB/<index>/db/.buckets_synced_to_remote_storage (where <index> is the index's directory, as shown in the find output below). The presence of this file indicates that Splunk does not need to upload buckets to remote storage, and therefore no migration needs to happen.
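As a rough illustration, that startup decision can be mimicked from the shell. This is a sketch, not Splunk's actual code; migration_needed is a hypothetical helper name, and the <index>/db layout is an assumption:

```shell
#!/bin/sh
# Sketch of the startup check: does this index still need a migration
# upload? (migration_needed is a hypothetical helper, not a Splunk command.)
migration_needed() {
  # $1: an index's db directory, e.g. $SPLUNK_DB/<index>/db
  if [ -f "$1/.buckets_synced_to_remote_storage" ]; then
    echo "no"    # marker present: buckets already synced, skip migration
  else
    echo "yes"   # marker absent: migration upload will run at startup
  fi
}
```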
The file will be absent at first startup, when there are no buckets on the system yet, or when a new index is created.
During migration, buckets are uploaded to remote storage. A migration upload starts on an S2-enabled index if the index contains buckets. The migration upload first does a bulk-add of all existing buckets to the CacheManager, in randomized order. It then touches the $SPLUNK_DB/<index>/db/.buckets_synced_to_remote_storage file, and finally calls registerExisting on the CacheManager, which triggers the upload jobs to begin.
Before uploading each individual bucket, the CacheManager first checks whether that bucket already exists on remote storage.
find $SPLUNK_DB -type f -name .buckets_synced_to_remote_storage
./var/lib/splunk/audit/db/.buckets_synced_to_remote_storage
./var/lib/splunk/_internaldb/db/.buckets_synced_to_remote_storage
./var/lib/splunk/_introspection/db/.buckets_synced_to_remote_storage
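To see the same information at a glance per index, a small helper can report which indexes are synced and which would trigger a migration upload at the next startup. report_sync_status is a hypothetical name, and the <index>/db layout under the given root is an assumption:

```shell
#!/bin/sh
# report_sync_status (hypothetical helper): for each <index>/db directory
# under the given root, print whether the sync marker is present.
report_sync_status() {
  for db in "$1"/*/db; do
    [ -d "$db" ] || continue
    if [ -f "$db/.buckets_synced_to_remote_storage" ]; then
      echo "synced:  $db"
    else
      echo "pending: $db"
    fi
  done
}
```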
If the migration needs to be restarted, shut down the indexers and delete .buckets_synced_to_remote_storage from every indexer. Upon restart, the migration will run again.
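That restart procedure can be sketched as follows. clear_sync_markers is a hypothetical helper name; run it only while splunkd is stopped, and heed the GUID warning that follows:

```shell
#!/bin/sh
# clear_sync_markers (hypothetical helper): remove every sync marker
# under the given SPLUNK_DB-style root so migration re-runs at startup.
clear_sync_markers() {
  find "$1" -type f -name .buckets_synced_to_remote_storage -delete
}
# Typical sequence on EACH indexer (assuming a standard install):
#   $SPLUNK_HOME/bin/splunk stop
#   clear_sync_markers "$SPLUNK_DB"
#   $SPLUNK_HOME/bin/splunk start
```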
Do NOT remove this file unless you also change the indexer GUID using splunk clone-prep-clear-config; otherwise you may corrupt previously uploaded buckets that have been evicted from the local cache.
clone-prep-clear-config is safe to use because it emulates a new indexer being added to the cluster: the indexer will not re-upload buckets that already exist on remote storage because they were uploaded under a different GUID.
If there is any possibility that the data on an indexer is not 100% complete due to eviction, you MUST change the GUID when removing .buckets_synced_to_remote_storage.
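Putting the warning together with the restart steps, a safe full reset on each indexer might look like the following. This is a hedged sketch assuming a standard $SPLUNK_HOME/$SPLUNK_DB layout; verify each step against your environment before running:

```shell
# Run on each indexer, with splunkd stopped throughout.
$SPLUNK_HOME/bin/splunk stop
# Reset the instance identity (including the GUID) so buckets already
# uploaded to remote storage are not re-uploaded or clobbered.
$SPLUNK_HOME/bin/splunk clone-prep-clear-config
# Remove the markers so the migration upload runs again at startup.
find "$SPLUNK_DB" -type f -name .buckets_synced_to_remote_storage -delete
$SPLUNK_HOME/bin/splunk start
```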