Deployment Architecture

Any IOPS recommendations for cold bucket block volume? Looking for cheap storage.

adnankhan5133
Communicator

The volume holding our hot/warm data must run on disks with at least 1200 IOPS, since Splunk Enterprise Security is part of our deployment. Is there a "safe" IOPS estimate for the cold bucket volume? Cold data will not be accessed regularly, since we're sizing our indexes to keep around 30 days of data in the hot/warm buckets (based on incoming volume) and the remaining 60 days in the cold buckets. We have a requirement to keep 90 days of data online.

Would searches against data in our cold buckets still execute if the cold volume ran at less than 1200 IOPS? I was thinking 300-400 IOPS, since I'm trying to keep down the costs associated with disk performance and I/O.
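For reference, the 30/60-day split described above might be expressed in `indexes.conf` roughly as follows. This is a hedged sketch only: the index name, paths, and size figures are placeholders, not from this thread, and the `maxDataSizeMB` values would need to be derived from the actual daily ingest volume.

```ini
# Hypothetical sketch of a 30-day hot/warm + 60-day cold layout.
# Index name, paths, and sizes are placeholders.

[volume:hotwarm]
path = /fast_disk/splunk          # the >= 1200 IOPS volume
maxVolumeDataSizeMB = 500000

[volume:cold]
path = /cheap_disk/splunk         # the lower-IOPS volume in question
maxVolumeDataSizeMB = 1000000

[my_index]
homePath   = volume:hotwarm/my_index/db
coldPath   = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# 90 days online in total; older data is frozen (deleted or archived)
frozenTimePeriodInSecs = 7776000
# cap hot/warm so roughly 30 days of data fits before buckets roll to cold
homePath.maxDataSizeMB = 150000
```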


isoutamo
SplunkTrust

Hi

For any disk that Splunk needs to access on a regular basis, I don't recommend less than 800 IOPS. That applies to colddb as well.

If you go lower than that, the slow IOPS will hit you while you're ingesting new data: when the hot/warm volume fills up, Splunk has to roll buckets to cold before it can add new events to the hot buckets, and that move is gated by the cold volume's performance.
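As a rough illustration of why the IOPS floor matters for searches over cold data, here is a back-of-envelope sketch. It assumes a worst case where every read is a 4 KiB random I/O (real searches mix random and sequential reads, so actual throughput sits somewhere above this); the sizes and numbers are illustrative, not from this thread.

```python
# Back-of-envelope: read throughput and scan time at a given IOPS ceiling,
# assuming worst-case 4 KiB random reads.

BLOCK_KIB = 4  # assumed I/O size per operation

def throughput_mib_s(iops: int, block_kib: int = BLOCK_KIB) -> float:
    """MiB/s achievable if every read is one random block of block_kib KiB."""
    return iops * block_kib / 1024

def scan_hours(size_gib: float, iops: int) -> float:
    """Hours to read size_gib of cold data at the given IOPS ceiling."""
    return size_gib * 1024 / throughput_mib_s(iops) / 3600

# Scanning 10 GiB of cold buckets at the two proposed IOPS levels:
print(f"300 IOPS: {throughput_mib_s(300):.2f} MiB/s, {scan_hours(10, 300):.1f} h")
print(f"800 IOPS: {throughput_mib_s(800):.2f} MiB/s, {scan_hours(10, 800):.1f} h")
```

At 300 IOPS the worst case is only about 1.2 MiB/s, so a search that touches tens of GiB of cold data can stall for hours; at 800 IOPS the same scan is roughly 2.7x faster.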

If you need cheaper storage, then S3 with SmartStore is probably the best option.
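A minimal SmartStore sketch in `indexes.conf` might look like the following. The bucket name, endpoint, and index name are placeholders; with SmartStore, warm buckets are cached locally and the master copy lives in the remote object store, so a separate cold volume is largely unnecessary.

```ini
# Hypothetical SmartStore configuration. Bucket name, endpoint, and
# index name are placeholders.

[volume:remote_store]
storageType = remote
path = s3://my-splunk-smartstore-bucket
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb   # still required, but rarely used
thawedPath = $SPLUNK_DB/my_index/thaweddb
remotePath = volume:remote_store/$_index_name
```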

r. Ismo
