Hi everyone,
Long story short:
I am planning to migrate our Splunk cluster from public cloud to on-prem. All of the old data will stay in the cloud, but we want to move it from local storage to SmartStore; new data will stream to the on-prem cluster with all of the configuration (index names, users, apps, reports, alerts, dashboards, etc.) unchanged. We will keep a minimal in-cloud cluster up and running until the old data ages out, which is why we want to move that data from local storage to SmartStore: cost savings.
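For reference, this is roughly the kind of SmartStore conversion I have in mind for the cloud cluster's indexes.conf. The volume name, S3 bucket/path, and credentials below are placeholders, not our real values:

    [volume:remote_store]
    storageType = remote
    path = s3://our-smartstore-bucket/indexes
    remote.s3.access_key = <access_key>
    remote.s3.secret_key = <secret_key>
    remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

    # Existing index, converted to use the remote volume
    [example_index]
    remotePath = volume:remote_store/$_index_name
    homePath   = $SPLUNK_DB/example_index/db
    coldPath   = $SPLUNK_DB/example_index/colddb
    thawedPath = $SPLUNK_DB/example_index/thaweddb
    maxDataSize = auto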
Now, I have two requirements:
1. Rename the indexes when the data is migrated to SmartStore. We may need to "hook up" the old data with our new on-prem cluster later, so the index names need to be different from their previous names.
2. A few of our indexes were configured with "maxDataSize = auto_high_volume". From the SmartStore documentation, it seems we can only use "maxDataSize = auto", and even if we reconfigure this setting to "auto", it will not resize the existing buckets from 10 GB down to 750 MB. My question: is there any way for us to simply move these buckets into SmartStore as they are? The only goal is to retain this data until it expires; there will be no active searching on it. (A rough sketch of what I have in mind for both points is below.)
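To make both asks concrete, here is roughly what I am hoping to end up with. The index names and the retention value are made up, and I do not know whether migrating the existing large buckets under a new index name like this is even supported, which is essentially what I am asking:

    # Current stanza on the cloud cluster (local storage, buckets up to ~10 GB)
    [firewall_logs]
    homePath   = $SPLUNK_DB/firewall_logs/db
    coldPath   = $SPLUNK_DB/firewall_logs/colddb
    thawedPath = $SPLUNK_DB/firewall_logs/thaweddb
    maxDataSize = auto_high_volume

    # Desired end state: the same data under a new name, on SmartStore,
    # with maxDataSize switched to auto as the docs require, kept only
    # until it ages out. Whether the existing 10 GB buckets can simply be
    # uploaded/migrated under this new name is the open question.
    [firewall_logs_legacy]
    remotePath = volume:remote_store/$_index_name
    homePath   = $SPLUNK_DB/firewall_logs_legacy/db
    coldPath   = $SPLUNK_DB/firewall_logs_legacy/colddb
    thawedPath = $SPLUNK_DB/firewall_logs_legacy/thaweddb
    maxDataSize = auto
    # retention value below is just an example
    frozenTimePeriodInSecs = 31536000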
Thank you