Hi,
We have an index that is fed data from an EKS/Kubernetes infrastructure and receives roughly 4 million events per 15 minutes during peak. The index ingests roughly 80GB/day.
Running queries on the data works great if you search within the current day. However, historical searches take a very long time, even when I filter on the proper fields for what I want, and the load on my indexers shoots up very high.
I have not modified any of the index parameters for this index in indexes.conf. This is a SmartStore index, and I have roughly 500GB of local cache set up. If anyone could let me know which tweaks might help, it would be greatly appreciated.
Hi @jmc94,
Since you have problems only with historical searches, this suggests that evicting and downloading buckets from SmartStore is taking time. You can check whether there is a bandwidth limitation between the indexers and the S3-compatible storage.
Please make sure that maxDataSize is set to auto, as recommended. If you are using auto_high_volume, the buckets are much larger, so downloading them from SmartStore takes much more time.
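For reference, a minimal SmartStore stanza with the recommended bucket sizing might look like the sketch below. The volume name, bucket path, endpoint, and index name are placeholders for illustration, not your actual values; check them against your own indexes.conf.

[volume:remote_store]
storageType = remote
path = s3://your-bucket/indexes
remote.s3.endpoint = https://s3.example.com

[your_k8s_index]
remotePath = volume:remote_store/$_index_name
maxDataSize = auto
homePath = $SPLUNK_DB/your_k8s_index/db
coldPath = $SPLUNK_DB/your_k8s_index/colddb
thawedPath = $SPLUNK_DB/your_k8s_index/thaweddb

With maxDataSize = auto, buckets roll at roughly 750MB instead of the roughly 10GB that auto_high_volume allows, so each cache miss pulls far less data from remote storage.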
Also, if the storage is a scale-out NAS solution, the 6k IOPS shown in benchmarks may not translate to the way Splunk uses the S3 API. You can check download actions and their durations/sizes in the internal logs:
index=_internal component=CacheManager
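To get a feel for how often buckets are being pulled down, you can start by breaking the CacheManager events out by action. This is a sketch; the exact field names logged (action, cache_id, sizes, elapsed times) can vary by Splunk version, so inspect the raw events first and adjust.

index=_internal sourcetype=splunkd component=CacheManager
| timechart span=15m count by action

A spike in download actions that lines up with your slow historical searches would point at cache misses and remote-storage fetch time rather than search-time processing.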
Hi @jmc94,
with so much data, it is possible to see delays in responses for old data.
The first question is: what performance does that storage deliver?
Do you have at least 800 IOPS (ideally 1200) on both the storage for hot and warm data and the storage for cold data?
Have you tried accelerating your searches (using summary indexes or accelerated data models)?
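As an example of the summary-index approach: a scheduled search rolls the raw events up into hourly counts and writes them to a small summary index with the collect command. The index and field names here are placeholders for illustration; the summary index must already exist.

index=your_k8s_index earliest=-1h@h latest=@h
| stats count by kubernetes_namespace, kubernetes_pod
| collect index=your_k8s_summary

Historical dashboards then search your_k8s_summary instead of the raw index, so they touch far fewer buckets and rarely need to pull old data back from SmartStore.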
Ciao.
Giuseppe
Yes, we have roughly 6k IOPS available on the backend storage, and we currently have 3 indexers. We have not tried summary indexes or accelerated data models yet.
Hi @jmc94,
If your infrastructure is correctly sized, then with so many events the only way forward is to accelerate your searches.
Ciao.
Giuseppe