Currently doing a SmartStore POC. The goal is to send only the frozen data to S3, but for reasons unknown to me, the entire index was sent to S3...
Here is my indexes.conf:
```
[prometheus_dock]
coldPath = $SPLUNK_DB/prometheus_dock/colddb
datatype = metric
homePath = $SPLUNK_DB/prometheus_dock/db
maxTotalDataSizeMB = 256000
thawedPath = $SPLUNK_DB/prometheus_dock/thaweddb
maxHotSpanSecs = 86400
frozenTimePeriodInSecs = 648000
remotePath = volume:s3/prometheus_dock
```
Index size in Splunk: 7.75 GB
When I query AWS S3 for the bucket size:
```
aws s3 ls --summarize --human-readable --recursive s3://splunkdockbucket/
Total Objects: 425
Total Size: 7.7 GiB
```
How do I configure indexes.conf so that only frozen data is archived to S3...?
SmartStore is a solution for sending warm/cold data to S3; it is designed so that the data remains searchable.
If you want to archive frozen data instead, take a look at the discussion here: https://answers.splunk.com/answers/56522/frozen-archives-into-amazon-s3.html#answer-351891
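For illustration, the approach in that thread roughly amounts to dropping remotePath (that setting is what enables SmartStore and uploads warm buckets) and setting coldToFrozenScript instead, so Splunk hands each bucket to your own script just before removing it. A rough, untested sketch, assuming the aws CLI is installed on the indexer; the script path and the archive bucket name splunk-frozen-archive are placeholders:
```
# indexes.conf -- no remotePath, so the index stays local; only frozen buckets leave the box
[prometheus_dock]
homePath = $SPLUNK_DB/prometheus_dock/db
coldPath = $SPLUNK_DB/prometheus_dock/colddb
thawedPath = $SPLUNK_DB/prometheus_dock/thaweddb
datatype = metric
frozenTimePeriodInSecs = 648000
coldToFrozenScript = "/opt/splunk/bin/scripts/freeze_to_s3.sh"
```
```
#!/bin/bash
# freeze_to_s3.sh -- Splunk invokes this with the path of the bucket being frozen as $1.
# Copy the bucket to S3; exit non-zero on failure so Splunk keeps the bucket and retries.
set -euo pipefail

BUCKET_DIR="$1"                                 # path to the bucket directory being frozen
BUCKET_NAME="$(basename "$BUCKET_DIR")"
S3_DEST="s3://splunk-frozen-archive/prometheus_dock/${BUCKET_NAME}/"

aws s3 cp --recursive "$BUCKET_DIR" "$S3_DEST"
```
Check the linked thread and the coldToFrozenScript documentation for the exact contract and restore procedure before relying on this.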
All the best 🙂
Thank you @chrisyoungerjds. I guess SmartStore is working perfectly then. Thanks. 🙂