Answering my own question with what I have found, hopefully to help others.
Since Splunk normally deletes data when it is frozen, I wanted to archive it to our on-prem S3 store instead. To do this, I found you need to set frozenTimePeriodInSecs and point coldToFrozenScript at a script that performs the work.
The cache manager calls this script with one argument: the path of the warm bucket that needs to be 'frozen'. My very simple script just copies that directory to an S3 target with the AWS CLI. This isn't designed for scalability, and it copies the whole bucket rather than tar/gzipping the contents; all of that could be handled by a more thorough script.
In indexes.conf, either globally or in the stanza for the specific index, you need the following (here, 86400 seconds freezes buckets after one day):
[scality]
coldToFrozenScript = "/bin/bash" "/opt/splunk/bin/coldToFrozenS3.sh"
frozenTimePeriodInSecs = 86400
And the script it's calling is extremely simple, like the following:
#!/bin/bash
set -e
set -u

# Splunk passes the full path of the bucket to freeze as the first argument
bucket=$1

# Take the final path component as the bucket name; basename works at any
# path depth, unlike a fixed-field cut -f9 -d"/"
warm=$(basename "$bucket")

echo "bucket to move: $bucket" >> /var/log/splunkToS3.log

# Redirections must appear in this order so stderr also lands in the log
/usr/bin/aws --profile default --endpoint-url http://[IP]:[port] s3 mv "$bucket" "s3://s3-bucket/scality/frozen/$warm" --recursive >> /var/log/splunkToS3.log 2>&1
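One detail worth noting is how the bucket name is extracted. A field-based `cut -f9 -d"/"` only works when the bucket sits exactly nine path components deep, whereas `basename` returns the final component at any depth. A quick demonstration with a made-up bucket path (the path and epoch values below are hypothetical, just shaped like what Splunk passes in):

```shell
# Hypothetical bucket path, in the shape Splunk passes as $1
bucket="/opt/splunk/var/lib/splunk/scality/db/db_1693526400_1693440000_12"

# basename strips everything up to and including the last slash
warm=$(basename "$bucket")
echo "$warm"
```

This prints `db_1693526400_1693440000_12` regardless of where SPLUNK_DB lives, so the script keeps working if the index is ever relocated.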