Hi Community,
I have a use case where the client needs data to be retained for an extended period of time. That data powers dashboards that use data models to generate their panels. Since the client wants the data to be available for at least 6 months, the idea was to create an index that keeps its hot/warm buckets on SSD and its cold buckets on slower storage.
I have two questions here:
If we have mixed SSD and HDD storage, and all of the dashboards are powered by data models, how much will this affect Splunk's performance? Will dashboard load times suffer under such a storage model?
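For reference, a tiered layout like this is usually expressed in indexes.conf by pointing homePath (hot/warm buckets) at an SSD-backed volume and coldPath at the slower storage. The volume names, paths, and sizes below are illustrative assumptions, not the poster's actual environment:

```
# indexes.conf -- sketch of a tiered index.
# Volume names, paths, and sizes are assumptions for illustration only.

[volume:ssd_fast]
path = /opt/splunk/ssd
maxVolumeDataSizeMB = 500000

[volume:hdd_slow]
path = /mnt/hdd/splunk
maxVolumeDataSizeMB = 2000000

[my_index]
homePath   = volume:ssd_fast/my_index/db
coldPath   = volume:hdd_slow/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Retain data roughly 6 months (186 days) before it is frozen.
frozenTimePeriodInSecs = 16070400
```

Note that thawedPath cannot use a volume reference, so it is given as a literal path here.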
Regards,
Pravin
Hi @richgalloway ,
Thanks for your response.
Regards,
Pravin
1. Examine the _bkt field of an event to find out which bucket it is in, then correlate that with the results of the dbinspect command. The state field will tell you whether the bucket is hot, warm, or cold.
| dbinspect index=foo [ search index=foo | eval bucketId=_bkt | dedup bucketId | fields bucketId | format ] | fields bucketId state
2. Data model data is stored with the index from which it was extracted. The location can be specified with the tstatsHomePath setting in indexes.conf.
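So one option is to keep the acceleration summaries on the faster tier even while cold buckets live on HDD, by overriding tstatsHomePath for the index. The volume name below is an assumption carried over from a hypothetical SSD volume definition:

```
# indexes.conf -- keep data model acceleration summaries on SSD (sketch).
# The default is volume:_splunk_summaries/$_index_name/datamodel_summary.
# "ssd_fast" is an assumed volume name; define it before referencing it.

[my_index]
tstatsHomePath = volume:ssd_fast/my_index/datamodel_summary
```

tstatsHomePath must be specified as a volume reference, which is why the value is not a bare filesystem path.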
Accelerated data is complete in itself. Searches against the summaries make no references back to the raw data.
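You can see that behavior with tstats: a search restricted to the summaries never touches the raw buckets, regardless of which storage tier they sit on. The data model name below is a placeholder:

```
| tstats summariesonly=true count
    from datamodel=My_Data_Model
    where earliest=-6mon@mon
    by _time span=1d
```

With summariesonly=true, time ranges that have not been summarized simply return no results instead of falling back to a raw-data search.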