Here's a summary of what was measured for the laptop drive tested in detail above, plus a second useful data point: a desktop drive, both alone and as part of a RAID array:
| Disks | Seq read (MB/s) | Seq write (MB/s) | Bonnie++ seeks | sysbench seeks | Commits per second |
|-------|-----------------|------------------|----------------|----------------|--------------------|
| Seagate 320 GB 7200.4 laptop | 71 | 58 | 232 @ 4 GB | 194 @ 4 GB | 105 or 1048 |
| WD 160 GB 7200 RPM | 59 | 54 | 177 @ 16 GB | 56 @ 100 GB | 10212 |
| 3X WD 160 GB RAID0 | 125 | 119 | 371 @ 16 GB | 60 @ 100 GB | 10855 |
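For reference, seek figures like the "@ size" values above typically come from runs along these lines; the sizes, directory, and runtime settings here are illustrative assumptions, not the exact invocations used for this table:

```
# Bonnie++: seeks are reported as part of its standard run. -s is the
# working file size in MB (16 GB here), -n 0 skips the small-file tests.
bonnie++ -d /mnt/test -s 16384 -n 0

# sysbench: random reads over a 100 GB working set, reported as requests/sec.
sysbench --test=fileio --file-total-size=100G prepare
sysbench --test=fileio --file-total-size=100G --file-test-mode=rndrd \
  --max-time=60 --max-requests=0 run
sysbench --test=fileio --file-total-size=100G cleanup
```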
Note how all the seek-related figures here are reported relative to the size of the area being used to seek over. This is a good habit to adopt. Also, note that for the laptop drive, two commit rates are reported. The lower value is with the write cache disabled (just under the drive's rotation rate of 120 rotations/second), while the higher one has it turned on, and therefore relies on an unsafe, volatile write cache.
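Toggling that cache between runs is how you can reproduce both laptop numbers; a minimal sketch, assuming the drive appears as /dev/sda (an example device name):

```
hdparm -W /dev/sda    # show the current write-cache setting
hdparm -W0 /dev/sda   # disable the write cache: safe commits, ~105/second
hdparm -W1 /dev/sda   # enable the write cache: ~1048/second, but volatile
```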
The other two samples use an Areca ARC-1210 controller with a 256 MB battery-backed write cache, which is why their commit rates are so high yet still safe. The hard drive shown is a 7,200 RPM 160 GB Western Digital SATA drive, model WD1600AAJS. The last configuration combines three of those drives into a Linux software RAID0 stripe. Ideally, that would provide 3X the performance of a single drive. It might not look like that's quite the case from the Bonnie++ sequential read/write results, which show closer to a 2.1X speedup. But this is deceptive, and once again it results from ZCAV issues. Using the Bonnie++ zcav tool to plot transfer speeds on both the single-drive and RAID0 configurations, you get the curves that follow.
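A minimal sketch of how such curves can be produced, assuming the single drive shows up as /dev/sda and the software RAID0 array as /dev/md0 (both example device names):

```
# Read each device end to end with bonnie++'s zcav tool, then plot the
# resulting position vs. transfer-rate data with gnuplot.
zcav /dev/sda > single-drive.zcav
zcav /dev/md0 > raid0.zcav
gnuplot -persist <<'EOF'
set xlabel "Position"
set ylabel "Transfer rate (MB/s)"
plot "single-drive.zcav" with lines title "Single WD1600AAJS", \
     "raid0.zcav" with lines title "3-disk RAID0"
EOF
```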
[Figure: zcav transfer rate vs. disk position for the single WD1600AAJS drive and the 3-disk RAID0 stripe]
The max and min transfer speed numbers derived from the raw data are almost exactly tripled here:
- 3 X 37 = 111 MB/s theoretical min; actual is 110 MB/s
- 3 X 77 = 231 MB/s theoretical max; actual is 230 MB/s
That's perfect scaling, exactly what you'd hope to see when adding more disks to a RAID array. The scaling didn't look right when only average performance was considered, probably because the files Bonnie++ created weren't on exactly the same portion of each disk, so the comparison wasn't fair. ZCAV issues have been highlighted so many times in this chapter because they pop up so often when you attempt fair comparison benchmarks of disks.
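One way to guard against that is to time reads at fixed physical offsets, so every configuration is measured over the same region of the platters. A rough sketch using dd, with an example device name and offsets:

```
# Sequential read at two fixed positions; skip is in 1 MB input blocks,
# so skip=153600 starts reading about 150 GB into the device.
dd if=/dev/sda of=/dev/null bs=1M count=1024 skip=0       # start of disk
dd if=/dev/sda of=/dev/null bs=1M count=1024 skip=153600  # ~150 GB in
```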