Why didn't my performance increase when I went from 4 disk RAID 10 to 6 disk RAID 10?

I had 4 drives in a RAID 10 set and 2 spare disks. We've been monitoring the perfmon Average Disk sec/Transfer (reads and writes) counter. We bought 2 more disks, rebuilt the RAID 10 set using 6 disks (spans of 2), and performance stayed the same. Is there some limitation where RAID 10 only improves in 4-disk multiples (4, 8, 12, etc.)?

I'm using a Dell R510. This is a SQL Server box; we kept the same files on the volume.


I'm using a Dell PERC H700 with 1GB non-volatile cache.

ADs/T (Avg. Disk sec/Transfer) is around 200ms.

These are 600GB 15K SAS drives.

I don't know how the Dell PERC controller does the spans. It just asks how many disks I want in each span (2 or 3 in this case).

The reason we added spindles was to reduce the overall time per transaction. We went from RAID 1 to RAID 10 (4 disks) and performance doubled; this time we were hoping for a 33% improvement. ADs/T is recommended at 20ms max, and we are at 100ms. We realize we weren't going to get to 20ms by adding 2 disks, but since they were at 400ms before I got here, being in the 50-60ms range would have been fine for the time being.

Transfers per second are around 850.
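One way to sanity-check this number is to divide the observed transfer rate across the spindles. A rough sketch (the ~180 IOPS figure for a single 15K SAS drive is an assumed ballpark, not a measurement, and RAID 10 writes land on two disks each, so the real per-spindle write load is higher than shown):

```python
# Back-of-envelope spindle load using the numbers from the question.
# SPINDLE_IOPS is an assumed ballpark for one 15K SAS drive.
TOTAL_TPS = 850      # observed transfers/sec from perfmon
SPINDLE_IOPS = 180   # rough random-I/O capacity of a single 15K spindle

def per_spindle_load(total_tps, data_spindles):
    """Transfers/sec each spindle must service if the load spreads evenly."""
    return total_tps / data_spindles

for disks in (4, 6):
    load = per_spindle_load(TOTAL_TPS, disks)
    print(f"{disks} disks: {load:.0f} tps/spindle, "
          f"{load / SPINDLE_IOPS:.0%} of nominal capacity")
```

If the per-spindle demand sits at or above what a 15K drive can deliver for random I/O even after adding two disks, the queue stays deep and latency stays high, which would be consistent with seeing no improvement.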


I understand the logic behind adding disks not speeding up an individual transfer against a given disk. But for all intents and purposes, ADs/T (or the read or write variants) is a measure of the amount of time it takes to do something against a given disk/volume, according to Windows. So if a whole transaction takes 40 milliseconds but is in reality writing to 4 disks, it's theoretically spending 10ms per disk if done in parallel. There's always the chance it's writing as fast as it can to one disk and then moving on to the next, but how can someone tell which it's doing?

For that matter, the time it takes with 4 disks should be proportional to the time with 6. Windows should see the volume as faster, even though once a disk has reached its maximum potential, each individual disk won't be any faster.
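The arithmetic in the two paragraphs above can be written down as a toy model. This is purely illustrative (a real PERC's scheduling is opaque from the host side), and the proportional-scaling formula only holds while the disks themselves, not the controller or the queue depth, are the bottleneck:

```python
def per_disk_time_ms(total_ms, n_disks, parallel=True):
    """Time each disk is busy if a total_ms transfer is striped over n_disks."""
    return total_ms / n_disks if parallel else total_ms

def expected_latency_ms(current_ms, current_disks, new_disks):
    """Ideal spindle scaling: latency drops in proportion to the disk count."""
    return current_ms * current_disks / new_disks

# The question's numbers: a 40ms transaction striped over 4 disks in parallel
# keeps each disk busy about 10ms; done serially, the caller waits the full 40ms.
print(per_disk_time_ms(40, 4, parallel=True))
print(per_disk_time_ms(40, 4, parallel=False))

# Under ideal scaling, 100ms on 4 spindles becomes ~67ms on 6 spindles,
# which is the hoped-for ~33% improvement.
print(expected_latency_ms(100, 4, 6))
```

When the observed latency does not follow `expected_latency_ms`, the usual explanation is that the ideal-scaling assumption is violated: the disks were already saturated, the extra spindles aren't receiving a share of the hot data, or something upstream (cache, controller, SQL itself) is the limit.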

SSD isn't an option for us due to the large size of our indexes, the total space we need for all of our SQL files, and the cost. SSD sizes just aren't there yet, though for smaller files it might make sense.

If adding disks won't improve speed, how else does one explain the 50% performance increase going from 2 to 4 disks, and then nothing going from 4 to 6?


You should dig deeper into SQL performance stats as well as drive stats. If you have lots of connections, it may help to create multiple smaller TempDB files (read up on that; it was a major gain for us: http://msdn.microsoft.com/en-us/library/ms175527.aspx). If your buffer cache hit ratio is below 93%, you may want more memory. One statistic isn't enough to diagnose the issue; there is more than one way to thrash storage.
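The two rules of thumb in play here (ADs/T at or under 20ms, buffer cache hit ratio at or above 93%) can be captured as a tiny checklist. The thresholds below are just those guidelines, not hard limits:

```python
def storage_hints(ads_t_ms, buffer_cache_hit_pct):
    """Flag which rule-of-thumb threshold(s) a server is violating."""
    hints = []
    if ads_t_ms > 20:
        hints.append("Avg Disk sec/Transfer high: separate busy files / add spindles")
    if buffer_cache_hit_pct < 93:
        hints.append("buffer cache hit ratio low: consider adding memory")
    return hints

# Example values only; the asker reported ~100ms ADs/T, cache ratio unknown.
print(storage_hints(100, 95))
```

The point is that both checks have to pass: fixing only the disk side while the buffer cache is starved (or vice versa) leaves the other bottleneck in place.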

If the box is running Windows Server 2008, use Resource Monitor to determine which DB files are the busiest; getting them onto different spindles is the best performance fix. Specifically, if you have tons of writes rather than mostly reads, look into a second drive controller and a couple of smaller external 15K drives, and move your LOG and TempDB files away from the drive set holding the data.