Deployment Architecture

Any IOPS recommendations for the cold bucket block volume? Looking for cheap storage.


The volume where the hot/warm data resides must run on a disk with at least 1200 IOPS, since Splunk Enterprise Security is part of our deployment. Is there a "safe" IOPS estimate for the cold bucket volume? Cold data will not be accessed regularly, since we're sizing our indexes to keep around 30 days of data in the hot/warm buckets (based on incoming volume) and the remaining 60 days in the cold buckets. We have a requirement to keep 90 days of data online.

Would searches that look for data located in our cold buckets still be able to execute if the cold buckets were running on less than 1200 IOPS? I was thinking 300-400 IOPS, since I'm trying to conserve costs associated with disk performance and I/O.




For any disks that Splunk needs to access on a regular basis, I don't recommend less than 800 IOPS. This also applies to colddb.

If you use less than that, those slow IOPS will hit you when you are ingesting new data: when the hot/warm partition becomes full, Splunk must roll some buckets to cold before it can add new events to the hot buckets, so a slow cold volume can stall ingestion.
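The hot/warm vs. cold split described above is configured per index in `indexes.conf`. A minimal sketch, assuming hypothetical mount points `/fast` (the 1200+ IOPS volume) and `/slow` (the cheaper volume) and the 90-day retention requirement from the question; the size value is a placeholder you would derive from actual daily ingest:

```ini
# indexes.conf -- paths and sizes below are illustrative assumptions
[main]
# hot/warm buckets on the fast (>= 1200 IOPS) volume
homePath = /fast/splunk/main/db
# cold buckets on the cheaper, slower volume
coldPath = /slow/splunk/main/colddb
thawedPath = /slow/splunk/main/thaweddb
# freeze (delete/archive) data older than 90 days = 7776000 seconds
frozenTimePeriodInSecs = 7776000
# cap hot/warm usage so roughly 30 days stays on the fast volume;
# placeholder value -- size it from your real daily ingest x 30
homePath.maxDataSizeMB = 300000
```

When `homePath.maxDataSizeMB` is reached, Splunk rolls the oldest warm buckets to `coldPath` — this is exactly the roll that slow cold-volume IOPS can delay.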

If you need cheaper storage, then S3 with SmartStore is probably the best option.
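With SmartStore, hot buckets and a cache of recently searched data stay on local fast disk, and warm buckets are uploaded to an S3 remote store, largely replacing the cold tier. A hedged sketch of the `indexes.conf` stanzas involved; the bucket name and endpoint are placeholder assumptions:

```ini
# indexes.conf -- SmartStore sketch; bucket/endpoint values are assumptions
[volume:remote_store]
storageType = remote
path = s3://my-splunk-bucket
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[main]
remotePath = volume:remote_store/$_index_name
# local cache for hot buckets and recently searched data
homePath = /fast/splunk/main/db
# coldPath must still be defined for SmartStore indexes,
# though it sees little use once data lives in the remote store
coldPath = /fast/splunk/main/colddb
thawedPath = /fast/splunk/main/thaweddb
```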

r. Ismo
