Here is the Q&A from the live session.

Q: Say I run a search, but the data is not in the cache, so it downloads from S3. Wouldn't downloading take extra time and slow my search?
A: Yes, if the data is not in your cache there can be a delay. If you frequently need to run rare searches, SmartStore might not be appropriate for your purposes, because rare searches can require the indexer to copy large amounts of data from remote to local storage, causing a performance impact. This is particularly the case with searches that cover long timespans. If, however, the searches are over recent data and the necessary buckets are therefore already in the cache, there is no performance impact.

Q: Are there plans to integrate SmartStore with Azure?
A: It sounds like this is part of the roadmap, but the timeline is currently TBD.

Q: We have a requirement to retain 7 years of indexed data. Only the latest 180 days of data is frequently used; the rest is used very rarely, so we would like to use Amazon S3 Intelligent-Tiering instead of Amazon S3 Standard for SmartStore. Does Splunk support it? If not, is there any plan to support it in the future?
A: SmartStore only supports the Amazon S3 Standard storage class.

Q: How do you mitigate cache blowout caused by high-scan-rate deep searches?
A: In general, the SmartStore cache manager keeps the cache optimized for the most recent and most frequently searched data. Searches over long time ranges can potentially fill the cache. If this is an infrequent scenario, the defaults should be fine; but if it is a recurring one (e.g., deep searches running on a frequent schedule), you will need to adjust some of the SmartStore settings to fit your needs.
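The cache-manager settings mentioned in that answer live in the [cachemanager] stanza of server.conf. A minimal sketch follows; the values shown are illustrative defaults, not tuning recommendations, so adjust them per deployment:

```
# server.conf -- illustrative SmartStore cache-manager settings
[cachemanager]
# Upper bound on the local cache size, in MB; 0 means no explicit limit.
max_cache_size = 0
# Buckets whose newest event falls inside this window (in seconds) are
# favored for retention in the cache, so searches over recent data
# avoid round trips to the remote store.
hotlist_recency_secs = 86400
# Bloom filters are kept longer than bucket data (here, in hours),
# which lets rare searches cheaply skip buckets with no matches.
hotlist_bloom_filter_recency_hours = 360
```

These are the main knobs for the "deep searches on a frequent schedule" case: raising hotlist_recency_secs widens the window of data pinned in cache, at the cost of cache space for everything else.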
Please see the links below for configuring the SmartStore cache manager and troubleshooting SmartStore issues:
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/ConfigureSmartStorecachemanager
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/TroubleshootSmartStore

Q: Do I need one AWS S3 bucket for all indexers and indexes? Or what is the formula here?
A: Yes, as part of the SmartStore setup, you need to create corresponding buckets in AWS S3 or an S3-API-compliant object store. A general formula for computing the storage needs:

Remote object store sizing = Daily ingest rate x Compression ratio x Retention period

The compression ratio is generally 50%. Please see the link below for configuring SmartStore:
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/ConfigureremotestoreforSmartStore
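The sizing formula above can be sketched as a small calculator. This is only an estimation aid based on the formula quoted in the answer; the function name and the example ingest figures are hypothetical:

```python
def remote_store_size_gb(daily_ingest_gb: float,
                         retention_days: int,
                         compression_ratio: float = 0.5) -> float:
    """Estimate SmartStore remote object store size in GB.

    Implements the formula from the session:
        size = daily ingest rate x compression ratio x retention period
    The 0.5 default reflects the ~50% compression ratio cited in the answer.
    """
    return daily_ingest_gb * compression_ratio * retention_days

# Hypothetical example: 100 GB/day ingest, 7-year (2555-day) retention.
print(remote_store_size_gb(100, 2555))  # → 127750.0
```

For the 7-year retention scenario raised earlier, this makes the trade-off concrete: even at 50% compression, long retention dominates the remote storage bill.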