Splunk Tech Talks
Deep-dives for technical practitioners.

Spend Less and Get More Out of Splunk with SmartStore

Splunk Employee

View our Tech Talk: Platform Edition, Spend Less and Get More Out of Splunk with SmartStore

Are you managing Splunk and concerned about rising infrastructure costs as you scale your use? If you want to make your infrastructure investments work harder for you, this Tech Talk is for you. Join our Splunk Tech Talk on Splunk SmartStore, which lets you decouple compute and storage and take advantage of external object storage such as Amazon S3 and S3 API-compliant object stores.

Tune in to:

  • Learn how to make the most of your infrastructure investments with Splunk 
  • Engage in a live demonstration of how to manage your Splunk environment with SmartStore
  • Get access to the latest resources on SmartStore in Splunk

Here is the Q&A from the live session.

Q: Say I run a search, but the data is not in cache. So it downloads from the S3. But wouldn't downloading take extra time and slow my search?
A: Yes, if the data is not in your cache there can be a delay. If you frequently need to run rare searches, SmartStore might not be appropriate for your purposes, as rare searches can require the indexer to copy large amounts of data from remote to local storage, causing a performance impact. This is particularly the case for searches that cover long timespans. If, however, the searches are across recent data and the necessary buckets are therefore already in the cache, there is no performance impact.
Q: Are there plans to integrate SmartStore with Azure?
A: Sounds like there are plans for it to be part of the roadmap, but the timeline is currently TBD.
Q: We have a requirement to retain 7 years of indexed data. Only the latest 180 days is used frequently; the rest is used very rarely, so we would like to use "Amazon S3 Intelligent-Tiering" instead of "Amazon S3 Standard" for SmartStore. Does Splunk support it? If not, are there plans to support it in the future?
A: SmartStore supports only the Amazon S3 Standard storage class.
Q: How do you mitigate cache blowout caused by high scan rate deep searches?
A: In general, the SmartStore cache manager keeps the cache optimized for the most recent and most frequently searched data. When searches need data from long time ranges, the cache can fill up. If this is an infrequent scenario, the defaults should be fine; if it is a recurring one (e.g., deep searches running on a frequent schedule), you will need to adjust some SmartStore settings to fit your needs.
Please find the link below for configuring the SmartStore cache manager and troubleshooting SmartStore issues.
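As a rough sketch of where that tuning lives (the values and index name below are purely illustrative, not recommendations; verify setting names against the documentation for your Splunk version), the cache manager is configured globally in server.conf, with per-index overrides in indexes.conf:

```ini
# server.conf -- illustrative values only
[cachemanager]
# Upper bound on local cache disk usage, in MB
max_cache_size = 500000
# Evict least-recently-used buckets first (the default policy)
eviction_policy = lru
# Protect buckets containing data newer than this (seconds) from eviction
hotlist_recency_secs = 86400

# indexes.conf -- per-index override for a hypothetical, frequently searched index
[my_frequently_searched_index]
hotlist_recency_secs = 604800
```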
Q: Do I need one AWS S3 bucket for all Indexers and indexes? Or what is the formula here?
A: Yes, as part of the SmartStore setup, you need to create corresponding buckets in AWS S3 or an S3 API-compliant object store.
Below is a general formula for computing the storage needs:

Remote Object Store sizing = Daily Ingest Rate x Compression Ratio x Retention period

The compression ratio is generally around 50%.
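To make the formula concrete, here is a small worked example; the 100 GB/day ingest rate and 1-year retention are purely illustrative inputs, not recommendations:

```python
# Hypothetical sizing example for a SmartStore remote object store.
# All input values are illustrative assumptions.

def remote_store_size_gb(daily_ingest_gb, compression_ratio, retention_days):
    """Remote object store sizing = daily ingest rate x compression ratio x retention period."""
    return daily_ingest_gb * compression_ratio * retention_days

# e.g., 100 GB/day ingest, ~50% compression, 1-year retention
size_gb = remote_store_size_gb(100, 0.5, 365)
print(f"{size_gb:,.0f} GB (~{size_gb / 1024:.1f} TB)")  # 18,250 GB (~17.8 TB)
```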

Please find the below link for configuring SmartStore
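As a minimal sketch of what that configuration looks like in indexes.conf (the bucket name, endpoint, and index name are hypothetical; in practice you may prefer an IAM role over inline keys):

```ini
# indexes.conf -- illustrative sketch; names and endpoint are hypothetical
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket
# Credentials can be omitted if the indexer uses an IAM role
remote.s3.access_key = <access key>
remote.s3.secret_key = <secret key>
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[my_index]
# $_index_name expands to the index name, giving each index its own prefix
remotePath = volume:remote_store/$_index_name
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```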

Here are additional resources to continue on your journey.


Training and Certification: Explore instructor-led sessions for admins on how to implement Splunk SmartStore

SmartStore Documentation: Get step-by-step guidance with Splunk Docs

Explore .conf materials online

Explore Partner SmartStore Offerings

SmartStore Tag on Community.splunk.com