Deployment Architecture

Any IOPS recommendations for cold bucket block volume? Looking for cheap storage.

adnankhan5133
Communicator

The volume where the hot/warm data resides must run on a disk with at least 1200 IOPS, since Splunk Enterprise Security is part of our deployment. Is there a "safe" IOPS estimate for the cold bucket volume? Cold data will not be accessed regularly, since we're sizing our indexes to keep around 30 days of data in the hot/warm buckets (based on incoming volume) and the remaining 60 days in the cold buckets. We have a requirement to keep 90 days of data online.

Would searches against data in our cold buckets still execute if the cold volume ran at less than 1200 IOPS? I was thinking 300-400 IOPS, since I'm trying to reduce the costs associated with disk performance and I/O.
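For a rough feel of what a lower IOPS budget means in practice, here is a back-of-envelope sketch. The 8 KiB average I/O size and the 10 GB bucket size are illustrative assumptions, not Splunk figures; real bucket-roll behavior mixes sequential and random I/O.

```python
# Back-of-envelope throughput estimate for a volume at a given IOPS budget.
# Assumption: 8 KiB average I/O size (illustrative, not a Splunk-documented value).

def throughput_mb_per_s(iops: int, io_size_kib: int = 8) -> float:
    """Approximate MB/s a volume can sustain at the given IOPS."""
    return iops * io_size_kib / 1024

def bucket_roll_minutes(bucket_gb: float, iops: int, io_size_kib: int = 8) -> float:
    """Rough time to copy one bucket from warm to cold at that rate."""
    mb = bucket_gb * 1024
    return mb / throughput_mb_per_s(iops, io_size_kib) / 60

# Compare the asker's 300-400 IOPS target against 800 and 1200 IOPS:
for iops in (300, 800, 1200):
    print(f"{iops} IOPS -> ~{throughput_mb_per_s(iops):.1f} MB/s, "
          f"10 GB bucket rolls in ~{bucket_roll_minutes(10, iops):.0f} min")
```

Under these assumptions, 300 IOPS sustains only a few MB/s, so rolling a 10 GB bucket to cold takes over an hour, which is where slow cold storage starts to back up ingestion.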


isoutamo
SplunkTrust

Hi

For any disk that Splunk needs to access on a regular basis, I don't recommend less than 800 IOPS. This applies to colddb as well.

If you use less than that, those slow IOPS will hit you when you are ingesting new data: once the hot/warm volume fills up, Splunk has to roll buckets to cold before it can add new events to the hot buckets.

If you need cheaper storage, then S3 with SmartStore is probably the best option.
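A minimal SmartStore setup in indexes.conf looks roughly like the sketch below. The bucket name and index stanza are placeholders; check the SmartStore documentation for your Splunk version before using it.

```ini
# indexes.conf -- minimal SmartStore sketch (placeholder names)
[volume:remote_store]
storageType = remote
path = s3://my-splunk-smartstore-bucket

[main]
remotePath = volume:remote_store/$_index_name
repFactor = auto
```

With SmartStore, warm buckets live in S3 and a local cache serves searches, so the classic cold-volume IOPS question largely goes away.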

r. Ismo
