> I don't want to send data to remote storage and then bring it back onto the indexer for archiving locally.

And:

> When a bucket rolls from warm to frozen, cache manager will download the warm bucket from the indexes prefix within the S3 bucket to one of the indexers. Splunk will then take the path to the bucket and pass it to the cold-to-frozen script for archiving, which places the archive in the S3 bucket under archives.

Can you elaborate a bit on this? At first you mention you don't want to upload to S3 and then download for archiving locally, but that appears to be how you solved the problem. I see that the archives go back to S3, so it's not archiving locally in terms of where the archives get stored, but it is archiving locally in terms of where it happens (as in, you still pay S3 egress fees, which I thought was the main reason for coming up with a workaround/solution). Wouldn't it be better to just leave them there?

> When archiving is successful, cache manager will delete the local and remote copies of the warm bucket.

Your data eventually ends up on S3 anyway, and it would be evicted from the cache for good (presumably no one needs to search it, which is why it's being archived), so what's the benefit?

> SmartStore will roll the buckets to frozen by default unless you set frozen time to 0, which will leave all warm buckets in S3. I didn't want that as a long term solution.

I wonder why. I like the creative approach, but I'm curious about the non-technical value (cost, a special use case, business rules, something else) you get in return for the possibly additional/unnecessary egress fees.
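For readers following along: the flow quoted above relies on Splunk's standard cold-to-frozen hook, where the script named by `coldToFrozenScript` in indexes.conf receives the local bucket path as its first argument and must exit 0 for the freeze to count as successful. Below is a minimal sketch of such a script that re-uploads the bucket to S3 under an archives prefix; the bucket name `my-smartstore-bucket`, the `archives` prefix, and the `/tmp` staging path are illustrative assumptions, not values from the original post.

```python
#!/usr/bin/env python3
"""Minimal coldToFrozen sketch: archive a frozen bucket back to S3.

Splunk invokes the script configured as coldToFrozenScript in
indexes.conf with the local path of the bucket as the first argument.
The S3 bucket name and "archives" prefix below are assumptions made
for illustration only.
"""
import os
import sys
import tarfile

import boto3

S3_BUCKET = "my-smartstore-bucket"   # assumed; replace with your bucket
ARCHIVE_PREFIX = "archives"          # assumed prefix for frozen copies

def main() -> None:
    # Splunk passes the bucket directory path, e.g. .../db_<newest>_<oldest>_<id>
    bucket_path = sys.argv[1]
    bucket_name = os.path.basename(bucket_path.rstrip("/"))
    tarball = f"/tmp/{bucket_name}.tar.gz"

    # Pack the bucket directory so a single object lands in S3.
    with tarfile.open(tarball, "w:gz") as tar:
        tar.add(bucket_path, arcname=bucket_name)

    # Upload the archive under the archives prefix, then clean up.
    boto3.client("s3").upload_file(
        tarball, S3_BUCKET, f"{ARCHIVE_PREFIX}/{bucket_name}.tar.gz"
    )
    os.remove(tarball)

    # Exit 0 tells Splunk the freeze succeeded, at which point the
    # cache manager deletes the local and remote warm copies.
    sys.exit(0)

if __name__ == "__main__":
    main()
```

Note that even with a script like this, the warm bucket is still downloaded from S3 to the indexer before the script runs, which is exactly the egress cost being questioned above.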