Deployment Architecture

Any IOPS recommendations for cold bucket block volume? Looking for cheap storage.

adnankhan5133
Communicator

The volume where the hot/warm data resides must run on disks with at least 1200 IOPS, since Splunk Enterprise Security is part of our deployment. Is there a "safe" IOPS estimate for the cold bucket volume? Cold data will not be accessed regularly, since we're sizing our indexes to keep roughly 30 days of data in the hot/warm buckets (based on incoming volume) and the remaining 60 days in the cold buckets. We have a requirement to keep 90 days of data online.

Would searches that target data in our cold buckets still execute if the cold volume delivered less than 1200 IOPS? I was thinking 300-400 IOPS, since I'm trying to reduce the costs associated with disk performance and I/O.
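For context, the 30/60-day split described above is driven by indexes.conf. A hypothetical sketch (paths, index name, and sizes are placeholders; the homePath size must be derived from your actual daily ingest and compression ratio):

```ini
# Hypothetical indexes.conf sketch for a 30-day hot/warm, 90-day total retention.
[my_index]
homePath   = /fast_disk/splunk/my_index/db        # hot/warm on the 1200-IOPS volume
coldPath   = /cheap_disk/splunk/my_index/colddb   # cold on the cheaper volume
thawedPath = /cheap_disk/splunk/my_index/thaweddb
# Rolling to cold is size-based, so cap hot/warm at ~30 days of compressed ingest:
# homePath.maxDataSizeMB ≈ daily_ingest_MB * compression_factor * 30
homePath.maxDataSizeMB = 300000
# Freeze (delete or archive) after 90 days, leaving ~60 days in cold.
frozenTimePeriodInSecs = 7776000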


isoutamo
SplunkTrust

Hi

For any disk that Splunk needs to access on a regular basis, I don't recommend less than 800 IOPS. This applies to colddb as well.

If you use less than that, those slow IOPS will hit you when you are ingesting new data: once the hot/warm volume becomes full, Splunk must roll some buckets to cold before it can add more new events to the hot buckets, so a slow cold volume can end up stalling indexing.
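As a rough way to sanity-check what a volume actually delivers, a small random-read probe can approximate sustained IOPS. This is a simplified sketch (the function name and parameters are illustrative, not a Splunk tool):

```python
import os
import random
import tempfile
import time


def measure_random_read_iops(path, file_size=64 * 1024 * 1024,
                             block_size=8192, duration=2.0):
    """Rough random-read IOPS estimate against a scratch file at `path`."""
    # Fill a scratch file with data to read back.
    with open(path, "wb") as f:
        f.write(os.urandom(file_size))
    ops = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        end = time.monotonic() + duration
        max_block = (file_size // block_size) - 1
        while time.monotonic() < end:
            # Seek to a random block-aligned offset and read one block.
            os.lseek(fd, random.randint(0, max_block) * block_size, os.SEEK_SET)
            os.read(fd, block_size)
            ops += 1
    finally:
        os.close(fd)
    return ops / duration


if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        scratch = tmp.name
    try:
        print(f"approx. random-read IOPS: {measure_random_read_iops(scratch):.0f}")
    finally:
        os.remove(scratch)
```

Note that the OS page cache will inflate these numbers considerably; for a realistic benchmark against the actual cold volume, use a dedicated tool such as fio with direct I/O.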

If you need cheaper storage, then S3 with SmartStore is probably the best option.
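With SmartStore, warm and cold buckets live in the remote object store and local disk only holds hot buckets plus a cache, so cold-volume IOPS sizing largely goes away. A hypothetical indexes.conf sketch (bucket name, endpoint, and paths are placeholders):

```ini
# Hypothetical SmartStore sketch; adapt names and endpoint to your environment.
[volume:remote_store]
storageType = remote
path = s3://my-splunk-smartstore-bucket
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[my_index]
remotePath = volume:remote_store/my_index
homePath   = /fast_disk/splunk/my_index/db       # local cache stays on fast disk
coldPath   = /fast_disk/splunk/my_index/colddb   # still required, rarely used
thawedPath = /fast_disk/splunk/my_index/thaweddb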

r. Ismo
