Usually the controller's default chunk size is best unless you are experiencing performance problems.
64 KB is the most common chunk size I have seen. It seems to work well for most workloads, since both indexing and search usually have plenty of data to write to or read from disk with each I/O. The smaller you make the chunk size, the more seeks your array will have to perform for a given amount of data.
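For concreteness, on Linux software RAID the chunk size is fixed at array creation time. A minimal sketch, assuming an 8-drive RAID 10 array; the device names and RAID level here are placeholders, not details from this thread:

```shell
# Hypothetical mdadm invocation setting a 64 KB chunk size on an
# 8-drive RAID 10 array. Device names are placeholders; adjust to
# your system. Hardware RAID controllers expose the same setting
# (often called "stripe element size") in their own config tools.
mdadm --create /dev/md0 --level=10 --raid-devices=8 \
      --chunk=64 /dev/sd[b-i]
```

Note that the chunk size cannot be changed in place on most hardware controllers, so it is worth benchmarking before loading production data.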
You can take a look here, in the "disk subsystem" section:
It really depends on your indexing volume per day and on whether or not you're clustering.
In their example with 8 drives, you could handle peaks of about 800 IOPS.
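As a sanity check, here is the arithmetic behind that figure, assuming roughly 100 random IOPS per spinning drive (a common rule of thumb for 10k/15k RPM disks, not a number from this thread):

```shell
# Back-of-envelope aggregate IOPS for a RAID array:
# ~100 random IOPS per drive is an assumed rule of thumb.
IOPS_PER_DRIVE=100
DRIVES=8
echo $((IOPS_PER_DRIVE * DRIVES))   # prints 800
```

Keep in mind this is a read-oriented ceiling; RAID levels with parity (5/6) pay a write penalty that reduces effective write IOPS well below this.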
And if hard drives don't deliver enough IOPS, you could always consider SSDs instead.
Thanks for the pointer. The manual does not provide details on fine-tuning RAID (stripe/chunk size) to match Splunk's read/write characteristics. The system is targeted at an indexing volume of 100 GB/day. I am thinking of using the RAID controller's default value.
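For what it's worth, 100 GB/day averages out to a fairly modest sustained write rate, though burst rates during peak indexing will be much higher:

```shell
# Average write rate for 100 GB/day, in KB/s (integer math):
# 100 GB * 1024 * 1024 KB/GB, divided by 86400 seconds/day.
echo $((100 * 1024 * 1024 / 86400))   # prints 1213, i.e. ~1.2 MB/s
```

So at this volume the default chunk size is very unlikely to be the bottleneck; concurrent search I/O will dominate long before sequential indexing throughput does.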