Installation

Need recommendations for RAID configuration (official Splunk response appreciated)...

tmeader
Contributor

We've been running an install of Splunk for approx 3.5 years now (originally starting with a Splunk 2.0 install and continuously migrating forward), and we're finally hitting a point where we'll be able to reconfigure our storage setup for Splunk in the next few weeks. The hardware that we have/will be working with is as follows:

- Dell 2950 (2 dual-core Xeons, 16GB RAM, CentOS x64, PERC5e RAID controller)
- 10TB of 7200RPM SATA drives (10 1TB disks)
- Dell MD1000 drive enclosure with SAS connection to the 2950
- /opt/splunk directory is installed/mounted off the SAS-connected MD1000 (which allows us to simply plug in our hot-spare 2950 as the indexer/web head should the primary ever fail)

When we originally did the install, we only had five 700GB drives, so space was at a premium (we have a daily volume of about 15-20GB of indexed data), and thus it was set up as RAID5.

For a couple of years things seemed alright, but within the past year we've experienced major problems with any search going back more than 24 hours absolutely taking forever (oftentimes even on very specific searches). So, while we have this opportunity, we'd like to pick the best config to migrate our /opt/splunk area to. Since the current volume is ~2.7TB in size, even using RAID10 (and thus getting 5TB from the new drives) would still be a sizable space increase. Alternatively, we've been looking at RAID6, but we find conflicting information online as to whether there is ANY speed advantage with RAID6 over RAID5 at all. I'm guessing that RAID10 is going to be optimal, but I'd like to get solid confirmation and recommendations from others as to the best course of action before moving ahead. There is also a chance that we could migrate a newer AMD-based Dell 2970 (dual quad-core Opteron) in place of the dual dual-core Xeon box. Given that the speed of the drives is likely going to be the biggest bottleneck regardless, we're wondering whether that 2950 swap would be worth the hassle.

Thanks in advance for any and all recommendations or suggestions.

1 Solution

dwaddle
SplunkTrust

RAID 10 is going to give superior performance to RAID 5 and RAID 6 in almost every workload. You don't have the extra reads needed to recompute parity on every write, and you have more potential spindles from which to service a read. See http://answers.splunk.com/questions/567/how-well-does-a-indexer-configured-w-raid-5-or-6-perform for additional info.

Another general rule is that more memory in your indexer node is going to make for better performance. If you can put 64GB (or more!) of memory in your indexer and let most of it be used for the OS filesystem cache, that will help.

Caching in your RAID controller can help some as well. I am not familiar with the PERC5e, but have seen how a RAID controller's cache can help with I/O bottlenecks - particularly on writes.

How you configure your hot/warm/cold bucketing in Splunk can also affect search performance. One big /opt/splunk filesystem is probably not ideal. It might be worth your effort to take a small set of your drives and put them aside into a smaller RAID group and keep your hot buckets there while putting your warm/cold buckets in the larger RAID group - that way you don't have disk contention between your indexing of new data and your searching of older data.
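
As a rough illustration of that split, an indexes.conf along these lines would keep hot/warm buckets on a small, fast RAID group and cold buckets on the larger one. The mount points and volume names here are made up, and the volume stanzas assume a reasonably recent Splunk version, so treat this as a sketch rather than a drop-in config:

    # indexes.conf -- illustrative sketch; mount points and volume names are made up
    [volume:fast]
    path = /splunk_hot          # small RAID group reserved for hot/warm buckets

    [volume:bulk]
    path = /splunk_cold         # larger RAID group for cold buckets

    [main]
    homePath   = volume:fast/defaultdb/db
    coldPath   = volume:bulk/defaultdb/colddb
    thawedPath = $SPLUNK_DB/defaultdb/thaweddb   # thawedPath cannot reference a volume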

Also, consider distributed search. 20GB / day is on the smaller side for distributed search, but the partitioning of your dataset across multiple machines could make search substantially faster. See http://blogs.splunk.com/2009/10/27/add-a-server-or-two/
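
For reference, the search peers ultimately end up listed in distsearch.conf on the search head, along the lines of the sketch below. The host names are placeholders, and the key exchange that authenticates the search head to its peers still has to be set up separately (through the UI or the CLI):

    # distsearch.conf on the search head -- host names are placeholders
    [distributedSearch]
    servers = https://indexer1.example.com:8089,https://indexer2.example.com:8089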

Finally, I would expect searches across a narrow range of time to still happen relatively quickly. If you are having searches over a narrow time range that still run for a long time, you may not have your bucketing configured optimally (e.g., each bucket contains a large time range of data). You might try the dbinspect search command (http://www.splunk.com/base/Documentation/4.1.3/SearchReference/Dbinspect) to check the min and max timestamp in each bucket -- and it might not be a bad idea to contact support and discuss this with them.
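
For example, a rough check along these lines lists the time span each bucket covers; it assumes a reasonably recent version of dbinspect, and field names such as startEpoch, endEpoch, and sizeOnDiskMB may differ in older releases:

    | dbinspect index=main
    | eval spanDays = round((endEpoch - startEpoch) / 86400, 1)
    | table bucketId state startEpoch endEpoch spanDays sizeOnDiskMB
    | sort - spanDays

Buckets spanning weeks or months at the top of that list are a good hint that the bucketing, rather than the RAID level, is what is hurting time-bounded searches.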

araitz
Splunk Employee

Searches over 24 hours shouldn't be causing this behavior, and your comment about specific searches makes me suspicious.

First, check to make sure your data store is healthy (a rough dbinspect sanity check is sketched after this list):

  • Only a handful of tsidx files in each bucket
  • Buckets don't have much time span overlap
  • Bucket sizes are fairly uniform
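
A rough way to sanity-check the last two points from the search bar, assuming a reasonably recent dbinspect (field names may vary by version):

    | dbinspect index=main
    | sort startEpoch
    | streamstats current=f max(endEpoch) as prevMaxEnd
    | eval overlapsEarlier = if(startEpoch < prevMaxEnd, "yes", "no")
    | stats count as buckets avg(sizeOnDiskMB) as avgMB min(sizeOnDiskMB) as minMB max(sizeOnDiskMB) as maxMB count(eval(overlapsEarlier="yes")) as overlappingBuckets

The tsidx count per bucket isn't exposed by this search; the quickest check there is simply to look inside a few bucket directories on disk.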

Next, check to make sure your search is optimal. Use the Search Profiler to identify the bottlenecks. If you are using flashtimeline for reporting, stop! Use charting, and if speed is paramount uncheck the "enable preview" checkbox.

Next, keep an eye on your CPU and I/O utilization, especially with regard to different search concurrency levels and time spans, looking for excessive I/O wait or periods of idle when you would expect activity.


To answer your question, here is my opinion based on several benchmarks that I performed last year. RAID 5 and RAID 10 offer "more or less" the same read performance. Sorkin correctly points out that there is a reduction in aggregate throughput which can affect RAID 5 reads, thus the "more or less". RAID 10 offers better write performance and thus better concurrency in read/write operations. RAID 5 provides more usable space than RAID 10.

It shouldn't matter too much unless you have very precise search speed requirements and/or are indexing excessively (>=100GB/day/indexer). Faster disks are another performance consideration for all kinds of searches, from sparse to dense. Faster cores will make search faster, while more cores will provide greater search concurrency. Distributing across more Splunk servers is the way to go, as dwaddle mentioned.

I wish that Sun/Dell hadn't pushed PERC so hard, as the controllers always seemed to offer inferior performance to their now-vanished competitors. Oh well!

ftk
Motivator

The PERC5e does cache, and it works well. There are plenty of improvements in the PERC 6 line, but the 5 works fine. Adaptive read-ahead and write-back caching work best in my setup.
