Splunk Enterprise

SmartStore: how to set max data size per index

mufthmu
Path Finder

Hi fellow Splunkers,

Since the maxTotalDataSizeMB parameter is only available for non-SmartStore indexes, which parameter replaces it for SmartStore indexes? I want to start evicting buckets when a certain size is reached. I could only find hotlist_recency_secs, which evicts data by age, not by size (per index).
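For context, this is the age-based setting I found, set per index in indexes.conf (index name, remotePath volume, and values are just illustrative):

```
# indexes.conf -- illustrative sketch, not recommended values
[my_index]
remotePath = volume:remote_store/$_index_name
# Keep buckets containing data from roughly the last 7 days
# favored in the local cache (age-based, not size-based):
hotlist_recency_secs = 604800
```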

Thanks!

Reference: https://docs.splunk.com/Documentation/Splunk/8.1.0/Admin/Indexesconf


richgalloway
SplunkTrust
SplunkTrust

Check out maxGlobalDataSizeMB and maxGlobalRawDataSizeMB.

The hotlist_recency_secs setting partially controls the S2 cache. That's different from controlling the data in the S2 database itself, which is what the other two settings above do.
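A per-index sketch of those two settings in indexes.conf (index name, remotePath volume, and sizes are placeholders, not recommendations):

```
# indexes.conf -- illustrative sketch
[my_index]
remotePath = volume:remote_store/$_index_name
# Freeze the oldest buckets once the index's total size (remote buckets
# plus warm buckets awaiting upload) exceeds ~500 GB:
maxGlobalDataSizeMB = 512000
# Alternatively, limit by the size of the raw (uncompressed) data:
maxGlobalRawDataSizeMB = 1024000
```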

---
If this reply helps you, Karma would be appreciated.

mufthmu
Path Finder

Thanks @richgalloway 

But I read the following for maxGlobalDataSizeMB:

* This setting includes the sum of the size of all buckets that reside
  on remote storage, along with any buckets that have recently rolled
  from hot to warm on a peer node and are awaiting upload to remote storage.

This parameter counts all warm buckets in BOTH local and remote storage. That means if I set it to 200GB, I can have at most 200GB of data for that index across both local and remote storage combined (correct me if I'm wrong). I'm more interested in a parameter that counts only local storage, so that when the limit is reached, the cache manager begins evicting buckets to remote storage.

 


richgalloway
SplunkTrust
SplunkTrust

Recall that, with SmartStore, warm buckets do not reside locally. If a warm bucket is found on local storage, it is because it is waiting to be uploaded to remote storage. Those local buckets are counted because space for them must be accounted for on the remote store.

To set the size of the SmartStore cache, use max_cache_size in server.conf.  See https://docs.splunk.com/Documentation/Splunk/8.0.5/Indexer/ConfigureSmartStorecachemanager#Initiate_...
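A sketch of that setting in server.conf on an indexer (the sizes are placeholders; eviction_padding and hotlist_recency_secs are shown alongside it for context):

```
# server.conf -- illustrative sketch
[cachemanager]
# Total disk space (MB) the SmartStore cache may use, across all indexes:
max_cache_size = 300000
# Keep at least this much free space (MB) before evicting:
eviction_padding = 5120
# Favor buckets with data newer than 24h when choosing what to keep cached:
hotlist_recency_secs = 86400
```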

---
If this reply helps you, Karma would be appreciated.

mufthmu
Path Finder

@richgalloway  I see.

So there is no parameter for SmartStore that limits local data size per index, am I correct?

maxGlobalDataSizeMB = includes remote storage. (I only want to evict buckets in my S3 by age, not size)

max_cache_size = this applies to the cache manager as a whole, not per index.

 

PS: It would be perfect for my situation if maxGlobalDataSizeMB did not include remote storage.


richgalloway
SplunkTrust
SplunkTrust

Correct, there are no per-index settings for SmartStore.  S2 caches whatever indexes it needs to meet search requirements.

There are per-index size limit settings, but they do not apply when using SmartStore.

---
If this reply helps you, Karma would be appreciated.