Monitoring Splunk

Smartstore S3 data retention period

mufthmu
Path Finder

Hello fellow Splunkers,

I have 2 questions regarding Splunk Smartstore's cachemanager:

1. How do I make sure that my cache manager is large enough to hold all warm buckets for 30 days? My daily license usage is about 15-20 GB/day, and my Splunk deployment is hosted in the cloud.

2. Which configuration parameters control deleting data archived in my remote storage (S3) once it is 60 days past the date it was archived OR 90 days past the date it was created?

For example: I set frozenTimePeriodInSecs to 90 days, but after just 30 days the data leaves the cache manager and is uploaded to S3. So when that same data reaches its 90th day, how can the cache manager freeze/delete it if it's sitting in an S3 bucket?

Thanks!

1 Solution

richgalloway
SplunkTrust

1. This is out of your hands.  Splunk Cloud will manage the cache for you.  

2. The buckets stored in S3 are managed for you by Splunk.  Despite the location, they're still Splunk buckets and still subject to the normal retention mechanism.  Don't do anything to them yourself.
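For reference, on a self-managed (non-Cloud) SmartStore deployment these are the settings that would govern cache sizing and bucket retention. The stanza and index names and the values shown are illustrative assumptions, not recommendations:

```ini
# server.conf on each indexer -- cache manager sizing (illustrative values)
[cachemanager]
# Total disk space, in MB, the cache manager may use for cached buckets.
# Size this to hold ~30 days of indexed data plus headroom.
max_cache_size = 800000
# Protect buckets whose newest event is within 30 days from eviction.
hotlist_recency_secs = 2592000

# indexes.conf -- retention still applies to buckets residing in S3
[my_index]
remotePath = volume:remote_store/$_index_name
# Buckets are frozen (deleted by default) once their newest event is
# older than 90 days, even if the bucket exists only in remote storage.
frozenTimePeriodInSecs = 7776000
# Optional size-based retention across hot/warm for SmartStore indexes:
maxGlobalDataSizeMB = 500000
```

Note that standard Splunk retention is driven by event age (frozenTimePeriodInSecs) and by size limits, not by the date a bucket was uploaded to S3, so a "60 days since archived" policy has no direct equivalent.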

---
If this reply helps you, an upvote would be appreciated.


