Keep in mind that IOPS will vary depending on whether the I/Os are random or sequential, reads or writes, and small or large (larger I/O sizes result in lower IOPS but higher bandwidth, and vice versa). For example, the performance of the 1TB 7.2K 6Gb SATA WD drives varies under different workloads, including some "high IOPS" results due to sequential read buffering.
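The IOPS vs. bandwidth trade-off above is just arithmetic: bandwidth is IOPS multiplied by I/O size. Here is a quick sketch of that relationship; the IOPS figures are illustrative assumptions, not measured drive specs.

```python
# Sketch of the IOPS <-> bandwidth trade-off.
# IOPS numbers below are illustrative assumptions, not measured specs.

def bandwidth_mb_s(iops: float, io_size_kb: float) -> float:
    """Bandwidth (MB/s) achieved at a given IOPS rate and I/O size."""
    return iops * io_size_kb / 1024.0

# The same drive doing small random IOs vs. large sequential IOs:
small = bandwidth_mb_s(iops=200, io_size_kb=8)     # many IOPS, ~1.6 MB/s
large = bandwidth_mb_s(iops=100, io_size_kb=1024)  # fewer IOPS, ~100 MB/s
print(f"8KB IOs at 200 IOPS: {small:.1f} MB/s")
print(f"1MB IOs at 100 IOPS: {large:.1f} MB/s")
```

This is why a drive's "IOPS" number is meaningless without the I/O size it was measured at.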
Sequential read IOPS should be higher due to buffering at the server, adapter, RAID card or controller, as well as at the drive. Most drives today have some amount of read cache, even some of the lower-cost SATA drives. Likewise, some of the higher-end SATA drives will have larger read caches, not to be confused with the hybrid/HHDD/SSHD devices, which do even more caching of reads and, with some, also cache writes. Something else that will impact IOPS is how much concurrency or activity is being sent to the drive, granted the drive has to be able to support that activity. On the other hand, a drive that can support more IOPS may not be pushed to its potential if little concurrent work is being done. Other factors and considerations include the OS, queue depths, driver configuration, interface (e.g. 3Gb, 6Gb, 12Gb SAS or SATA), and RAID, among other things.
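The concurrency point can be sketched with Little's law: achievable IOPS is bounded by the number of outstanding I/Os (queue depth) divided by per-I/O latency. This is a rough upper-bound model with an assumed 8ms service time, not a measured figure, and on a real single spindle latency grows as queues deepen, so the larger numbers only apply across an array of drives.

```python
# Little's-law sketch of concurrency vs. IOPS:
#   achievable IOPS <= outstanding IOs (queue depth) / per-IO latency.
# The 8ms latency is an assumed value; on a single spindle latency
# rises with queue depth, so this is an upper bound, not a prediction.

def max_iops(queue_depth: int, latency_s: float) -> float:
    return queue_depth / latency_s

for qd in (1, 4, 32):
    print(f"queue depth {qd:2d}: at most ~{max_iops(qd, 0.008):.0f} IOPS")
```

With only one outstanding I/O at a time, even a fast drive sits idle between requests, which is why benchmarks at queue depth 1 look so different from deep-queue results.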
Here are some measured results of how many IOPS can be done across various HDDs, SSDs, and SSHD/HHDDs with varying workloads (I/O size, reads, writes, random, sequential), along with the associated bandwidth as well as latency. Of course your mileage will vary...
Btw, I mentioned HHDDs (Hybrid Hard Disk Drives) however I did not say what they are (my bad).
Here is a link (2011 Summer Momentus hybrid hard disk drive (HHDD) moment) to a series of posts that I have been doing covering how I have been using HHDDs for over a year to boost performance in my workstations and laptops. In simple terms, solid state devices (SSDs) are very fast, providing many times more IOPS (or bandwidth) than HDDs, however at a higher cost. HDDs provide a large amount of space capacity and a lower number of IOPS compared to SSDs for a given price.
HHDDs provide the best of both worlds in that they are externally and physically identical to an HDD and, depending on implementation, will be plug compatible with your controller/adapter, transparent to your server, workstation or laptop operating system.
There should be no need for special drivers, adapters, migration or management software, as is the case with tiering or automated data movement technologies. The benefit with HHDDs is that you get a performance boost: a small SSD is integrated with the internal HDD processor to function as a persistent (no data loss when power is turned off) cache for the underlying HDD.
I'm using the Seagate Momentus XTs, which are 7.2K RPM 500GB 2.5" form factor SATA HDDs that have an integrated 4GB SLC flash SSD plus a 32MB DRAM buffer. These HHDDs appear to a Windows or Apple OS as a regular disk with no special drivers, yet enable more IOPS than an HDD. If the data you are working with on a regular basis is small enough to fit into the 4GB SSD buffer, you will see performance the same as or very close to an SSD. However, as you fill up the SSD buffer, and until the data gets destaged to the HDD, performance will shift back to what a traditional HDD provides. Likewise, as data is read from the HDD into the SSD buffer, performance will increase. Oh, why not go all SSD? Simple: cost!
My 500GB HHDDs give me, for my needs, performance comparable to SSDs while providing several times more space (which tends to be dormant) at a fraction of the cost. I also have SSDs that I use for some things where the focus is on as many IOPS or as much performance as possible in a given amount of time. Hence if you have the need for speed and the budget, go with SSDs to reduce the number of HDDs needed. On the other hand, if you need a lot of space capacity along with reasonable random performance, check out the HHDDs.
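The space-versus-speed trade-off comes down to two unit costs: dollars per GB and dollars per IOPS. Here is a sketch of that comparison; the prices and IOPS figures are made-up assumptions, not quotes for any real product.

```python
# Sketch of cost-per-GB vs. cost-per-IOPS.
# All prices and IOPS figures are illustrative assumptions.

drives = {
    # name: (price_usd, capacity_gb, random_iops)
    "HDD":  (80.0,  500, 150),
    "HHDD": (120.0, 500, 600),
    "SSD":  (600.0, 256, 20000),
}

for name, (price, gb, iops) in drives.items():
    print(f"{name:4s}: ${price / gb:6.3f}/GB, ${price / iops:8.4f}/IOP")
```

HDDs win on $/GB, SSDs win on $/IOPS, and the hybrid lands in between, which is the whole pitch.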
Dwaddle is spot on with "...it depends..."
There is another aspect to bring into the discussion about I/O Operations Per Second (IOPS), which is the average size in kbytes of the work being done. As Dwaddle mentioned, various types of hard disk drives (HDDs), Hybrid HDDs (HHDDs) and solid state devices (SSDs) will have different IOPS capabilities, however those metrics are tied to work being done of a given size. The smaller the I/O size, the higher the IOPS and the lower the bandwidth or throughput numbers; likewise, the larger the I/O size, the higher the bandwidth or throughput, with lower IOPS. Also keep latency or response time in mind, which should be low for interactive or transactional activity.
Even though some HDDs can do 210 or more IOPS, those IOPS are of a small size, say 4Kbytes to 8Kbytes (check specific vendors' spec sheets); however, how those HDDs are attached to your computer or server will also make a difference. In addition to the type and speed of the interconnect (USB 2.0/3.0, SAS 3G/6G, 1GbE iSCSI, Fibre Channel, SATA, etc.), the type of adapter, controller or storage system will also help or in some cases hinder the native drive performance. Generally the controller, adapter or storage system should be neutral or boost performance with caching and other techniques. However, it is possible for a controller, adapter or storage system to be implemented in a manner where the full drive potential is not realized. Likewise there are controllers, adapters and storage systems that are actually starved by not having enough HDDs to service I/O, which also tends to be a sign of the need for SSDs or faster drives.
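One way to see the interconnect's role is to compute the IOPS ceiling the link itself imposes at a given I/O size. The link rates below are the nominal effective figures and, as a simplifying assumption, ignore protocol and encoding overhead.

```python
# Sketch of the interconnect ceiling: regardless of what the drive or
# array can do, the link caps throughput. Rates are nominal effective
# MB/s and ignore protocol overhead (a simplifying assumption).

LINK_MB_S = {"SATA 3Gb": 300, "SATA 6Gb": 600, "SAS 12Gb": 1200}

def link_iops_ceiling(link_mb_s: float, io_size_kb: float) -> float:
    """Maximum IOPS the link can carry at a given I/O size."""
    return link_mb_s * 1024.0 / io_size_kb

for link, mb_s in LINK_MB_S.items():
    print(f"{link}: at most ~{link_iops_ceiling(mb_s, 8):,.0f} 8KB IOPS")
```

For a single HDD doing a couple hundred IOPS the link is nowhere near the bottleneck, but put many drives or an SSD behind one port and the ceiling starts to matter.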
Also, as Dwaddle mentions, RAID configuration can make a big difference, with RAID 0 (stripe) being the fastest for reads and writes; however, it also provides no protection, and in fact introduces the risk of a single drive failure taking the entire stripe set offline. As a result it is typically only used for static or read data that can be rapidly restored from another disk, tape or other means. RAID 1 (mirroring) gives both good read and write performance, however it has the highest capacity space overhead of the RAID levels. RAID 10 (1+0) and (0+1), stripe & mirror / mirror & stripe, provide a good balance of performance with protection, where data are striped and mirrored (or mirrored and striped), however at the expense of extra storage capacity. RAID 5 is stripe with rotating parity and is very popular for concurrent reads; however, it imposes a write penalty (hence the need for battery-backed write cache in the controller/adapter/storage system) due to parity calculation operations. RAID 6 is popular for using large-capacity, low-cost drives, using dual parity with stripe to protect against a double drive failure. Another popular RAID option is RAID 4, which is what NetApp uses as a variation for their default protection, in addition to RAID DP (Double Parity).
Reads will typically be faster than writes, hence a higher percentage of smaller reads should yield a higher IOPS rate (guess what vendors like to show for benchmark numbers?) and hopefully lower latency. Likewise, for high bandwidth or throughput, to show large numbers of MBytes or GBytes per second, the game is to use very large I/Os of 64K, 128K or bigger, such as would be seen with backup/restore, streaming video/audio, bulk data movement and other sequential operations. Cache in the controller/adapter/storage system can help, particularly on reads, however there can also be write benefits.
There is much more to the topic, however hopefully that gives you more to think about. If you are interested or want to know more, check out my blog http://storageioblog.com or main website http://storageio.com where you can find articles, tips, videos, podcasts, reports and other related content. In addition, in my books Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) I go into more discussion about IOPS, reads, writes, performance, workloads, benchmarking, storage systems and other related topics.