How do storage IOPS change in response to disk capacity?

All other things being equal, how would a storage array's IOPS performance change if one used larger disks?

For example, take an array with 10 X 100GB disks.

Measure IOPS for sequential 256 KB block writes (or any IOPS metric).

Let's assume the resulting measured IOPS is 1000 IOPS.

Now swap the array for one with 10 X 200GB disks. Format with the same RAID configuration, same block size, etc.

Would one expect the IOPS to remain the same, increase, or decrease? Would the change be roughly linear, i.e. increase or decrease by 2X (since I've increased the disk capacity by 2X)?

Repeat these questions with 10 X 50GB disks.

Edit: More Context

This question resulted from a conversation among my sysadmin team, which is not well versed in all things storage. (Comfortable with many aspects of storage, but not the details of managing a SAN or whatever.) We are receiving a big pile of new NetApp trays that have higher capacity per disk -- double capacity -- than our existing trays. The comment came up that the IOPS of the new trays would be lower just because the disks were larger. Then a car analogy came up to explain this. Neither comment sat well with me, so I wanted to run it out to The Team, i.e. Stack-Exchange-land.

The car analogy was something about two cars, with different acceleration, the same top speed, and running a quarter mile. Then change the distance to a half mile. Actually, I can't remember the exact analogy, but since I found another one on the interwebz that was similar I figured it was probably a common IOPS analogy.

In some ways, the actual answer to the question doesn't matter that much to me, as we are not using this information to evaluate a purchase. But we do need to evaluate the best way to attach the trays to an existing head, and best way to carve out aggregates and volumes.


To answer your question directly: all other things being equal, no change whatsoever when capacity changes.

You don't calculate IOPS from capacity. You calculate it from the seek time and the rotational latency.
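To make that concrete, here's a minimal sketch of the standard back-of-the-envelope formula: per-disk IOPS is roughly the inverse of the average seek time plus the average rotational latency (half a revolution). The seek time and RPM figures below are illustrative assumptions for a typical 10k RPM SAS drive, not vendor specs — note that capacity never appears anywhere in the calculation.

```python
# Rough per-disk IOPS estimate from mechanical characteristics.
# Capacity is not an input: only seek time and spindle speed matter.
def estimate_iops(avg_seek_ms: float, rpm: int) -> float:
    # Average rotational latency = time for half a revolution.
    rotational_latency_ms = (60_000 / rpm) / 2
    # One random IO completes per (seek + rotational latency) interval.
    return 1000 / (avg_seek_ms + rotational_latency_ms)

# Illustrative 10k RPM SAS drive with an assumed 4 ms average seek:
print(round(estimate_iops(avg_seek_ms=4.0, rpm=10_000)))  # ~143 IOPS
```

Run the same numbers for a 300GB and a 600GB drive with identical mechanics and you get the identical answer, which is the whole point.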

I could rewrite it all here, but the examples below already cover it and I would simply be repeating them:

https://ryanfrantz.com/posts/calculating-disk-iops.html

http://www.big-data-storage.co.uk/how-to-calculate-iops/

http://www.wmarow.com/strcalc/

http://www.thecloudcalculator.com/calculators/disk-raid-and-iops.html


I know this is probably a hypothetical question... But the IT world really doesn't work that way. There are realistic constraints to consider, plus other things that can influence IOPS...

  • 50GB and 100GB disks don't really exist anymore. Think more: 72, 146, 300, 450, 600, 900, 1200GB in enterprise disks and 500, 1000, 2000, 3000, 4000, 6000GB in nearline/midline bulk-storage media.

  • There's so much abstraction in modern storage (disk caching, controller caching, SSD offload, etc.) that any differences would be difficult to discern.

  • You have different drive form factors, interfaces and rotational speeds to consider. SATA disks have a different performance profile than SAS or nearline SAS. 7,200RPM disks behave differently than 10,000RPM or 15,000RPM. And the availability of the various rotational speeds is limited to certain capacities.

  • Physical controller layout. SAS expanders and RAID/SAS controllers can influence IOPS, depending on disk layout, oversubscription rates, and whether the connectivity is internal to the server or in an external enclosure. Large numbers of SATA disks perform poorly on expanders and during drive-error conditions.

  • Some of this can be influenced by fragmentation and the used capacity on the disk array.

  • Ever hear of short-stroking?

  • Software versus hardware RAID, prefetching, adaptive profiling...
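Short-stroking, mentioned above, is actually a case where larger capacity can *help*: if the same amount of data lives on a bigger disk, the heads only sweep a fraction of the platter, shortening average seeks. This toy model assumes seek time scales linearly with the fraction of the disk in use, which is a deliberate simplification for illustration only.

```python
# Toy model of short-stroking: confining IO to the outer fraction of a
# larger disk shrinks the average seek, improving IOPS.
# Assumption (illustrative only): average seek time scales linearly
# with the fraction of the LBA range actually used.
def short_stroked_iops(full_stroke_seek_ms: float,
                       used_fraction: float,
                       rotational_latency_ms: float) -> float:
    avg_seek_ms = full_stroke_seek_ms * used_fraction
    return 1000 / (avg_seek_ms + rotational_latency_ms)

# Same data, but on a disk twice the size (half the LBA range used):
full = short_stroked_iops(8.0, 1.0, 3.0)   # whole disk in use
half = short_stroked_iops(8.0, 0.5, 3.0)   # outer half only
print(round(full), round(half))
```

So, contrary to the "bigger disks are slower" comment, a double-capacity disk holding the same data can come out *ahead* on random IOPS.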

What leads you to believe that capacity would have any impact on performance in the first place? Can you provide more context?

Edit:

If the disk type, form factor, interface and used-capacity are the same, then there should be no appreciable difference in IOPS. Let's say you were going from 300GB to 600GB enterprise SAS 10k disks. With the same spindle count, you shouldn't see any performance difference...
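The spindle-count point can be sketched numerically. Aggregate array IOPS is commonly estimated from disk count, per-disk IOPS, workload read/write mix, and the RAID write penalty — capacity is not a parameter at all. The per-disk IOPS figure and the 70/30 read mix below are illustrative assumptions, and the write-penalty table is the commonly cited rule of thumb (RAID 5 = 4 back-end IOs per write, RAID 6 = 6).

```python
# Aggregate array IOPS estimate. Note: per-disk *capacity* never
# appears — only spindle count, per-disk IOPS, mix, and RAID level.
RAID_WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid5": 4, "raid6": 6}

def array_iops(disks: int, iops_per_disk: float,
               read_fraction: float, raid: str) -> float:
    raw = disks * iops_per_disk
    penalty = RAID_WRITE_PENALTY[raid]
    # Reads cost 1 back-end IO each; writes cost `penalty` each.
    return raw / (read_fraction + (1 - read_fraction) * penalty)

# 10 spindles at an assumed ~140 IOPS each, 70% reads, RAID 5 —
# the same result whether those are 300GB or 600GB disks:
print(round(array_iops(10, 140, 0.7, "raid5")))
```

Swap 300GB disks for 600GB disks with the same spindle count and rotational speed, and every input to this calculation stays the same — hence no appreciable IOPS difference.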

However, if the NetApp disk shelves you mention employ 6Gbps or 12Gbps SAS backplanes versus a legacy 3Gbps, you may see a throughput change in going to newer equipment.