When a bucket exceeds the configured data retention time and remote.s3.supports_versioning=true is set, my understanding is that SmartStore puts a delete marker on the corresponding bucket as it is frozen, and that the data/bucket is then ignored by SmartStore for any subsequent searches.
Instead, I'm seeing that the bucket gets completely deleted, with no delete marker. I wanted to make sure that no configuration is needed other than:
Enable versioning on the S3 bucket
Ensure that remote.s3.supports_versioning=true (default)
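For reference, here is a minimal indexes.conf sketch of the setup described above; the volume name, S3 bucket, and index name are hypothetical, and remote.s3.supports_versioning is shown explicitly even though true is the default:

```ini
# Hypothetical SmartStore remote volume definition
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes
# Default is true; shown here only for clarity
remote.s3.supports_versioning = true

# Hypothetical index using the remote volume
[my_index]
remotePath = volume:remote_store/$_index_name
```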
When remote.s3.supports_versioning = true, we iterate over all versions of an S3 object (file) and remove every version. Otherwise, we do a simple remove on the object. This means that if the setting is true, all versions are removed and the object contents are irretrievable; because each version is deleted explicitly by its version ID, no delete marker is created.
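To make the distinction concrete, here is a rough boto3 sketch of what removing all versions amounts to at the S3 level. The bucket and key names are hypothetical, and this illustrates only the S3 operations involved, not Splunk's actual implementation:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-smartstore-bucket"            # hypothetical
prefix = "indexes/main/db/example-object"  # hypothetical

# Enumerate every version of the object, including any existing
# delete markers, and delete each one by its explicit VersionId.
# Deleting by VersionId removes that version permanently and does
# NOT create a delete marker, so nothing of the object remains.
paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for v in page.get("Versions", []) + page.get("DeleteMarkers", []):
        s3.delete_object(Bucket=bucket, Key=v["Key"], VersionId=v["VersionId"])
```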
If set to false, the behavior is as follows:
1) if bucket versioning is disabled, the object is simply gone forever;
2) if bucket versioning is enabled, the "remove object" operation simply puts a delete marker on top. Keep in mind that the delete marker is not explicitly put by us; S3 itself creates it in response to the simple removal. Whether there is a delete marker therefore depends on whether bucket versioning is enabled and on the method of removal.
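By contrast, a simple (non-versioned) delete on a versioning-enabled bucket leaves the data intact behind a delete marker. A minimal boto3 sketch, again with hypothetical names:

```python
import boto3

s3 = boto3.client("s3")

# A "simple" delete (no VersionId) on a versioning-enabled bucket
# removes no data; S3 inserts a delete marker as the newest
# version of the key instead.
resp = s3.delete_object(
    Bucket="my-smartstore-bucket",        # hypothetical
    Key="indexes/main/db/example-object"  # hypothetical
)
print(resp.get("DeleteMarker"))  # True when a delete marker was created
```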
There is nothing in Splunk itself about versioning; it is handled at the storage level. Splunk only does
1) "simple" object removal or
2) removal of all versions of an object, depending on the configuration.
remote.s3.supports_versioning = <boolean>
* Specifies whether the remote storage supports versioning.
* Versioning is a means of keeping multiple variants of an object
in the same bucket on the remote storage.
* Default: true
Hence I would expect a delete marker to be in place when an object is deleted. Can you clarify?
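One way to verify which behavior occurred is to list the object versions under the relevant prefix after retention has removed a bucket. A minimal check, assuming hypothetical bucket and prefix names:

```python
import boto3

s3 = boto3.client("s3")
resp = s3.list_object_versions(
    Bucket="my-smartstore-bucket",  # hypothetical
    Prefix="indexes/main/db/",      # hypothetical
)

# With remote.s3.supports_versioning = true, removed objects appear
# in neither list: every version was deleted explicitly by its
# VersionId, so no delete marker was ever created.
print(len(resp.get("Versions", [])), "versions")
print(len(resp.get("DeleteMarkers", [])), "delete markers")
```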