This is a follow-up to a previous question that I asked (Two servers with inconsistent disk speed).

I have a PowerEdge R510 server with a PERC H700 integrated RAID controller (call this Server B) that was built using eight 3Gb/s disks, which I was comparing with an almost identical server (call this Server A) built using four 6Gb/s disks. Server A had much better I/O rates than Server B.

Once I discovered the difference in the disks, I had Server B rebuilt with faster 6Gbps disks. Unfortunately this resulted in no increase in disk performance. Expecting that there must be some other configuration difference between the servers, we took the 6Gbps disks out of Server A and put them in Server B. This also resulted in no increase in disk performance.

We now have two identical servers, with the exception that one is built with six 6Gbps disks and the other with eight 3Gbps disks, and the I/O rates of the disks are pretty much identical. This suggests that there is some bottleneck other than the disks, but I cannot understand how Server A originally had better I/O that has subsequently been 'lost'.

Comparative I/O information is below, as measured by SQLIO; the same parameters were used for each test. It's not the actual numbers that are significant, but rather the variations between systems. In each case D: is a 2-disk RAID 1 volume and E: is a 4-disk RAID 10 volume (apart from the original Server A, where E: was a 2-disk RAID 0 volume).

Server A (original setup with 6Gbps disks)

D: Read (MB/s)     63 MB/s
D: Write (MB/s)    170 MB/s
E: Read (MB/s)     68 MB/s
E: Write (MB/s)    320 MB/s

Server B (original setup with 3Gbps disks)

D: Read (MB/s)     52 MB/s
D: Write (MB/s)    88 MB/s
E: Read (MB/s)     112 MB/s
E: Write (MB/s)    130 MB/s

Server A (new setup with 3Gbps disks)

D: Read (MB/s)     55 MB/s
D: Write (MB/s)    85 MB/s
E: Read (MB/s)     67 MB/s
E: Write (MB/s)    180 MB/s

Server B (new setup with 6Gbps disks)

D: Read (MB/s)     61 MB/s
D: Write (MB/s)    95 MB/s
E: Read (MB/s)     69 MB/s
E: Write (MB/s)    180 MB/s

Can anybody suggest any ideas what is going on here?

The drives in use are as follows:

  • Dell Seagate F617N ST3300657SS 300GB 15K RPM SAS
  • Dell Hitachi HUS156030VLS600 300GB 3.5 inch 15000rpm 6GB SAS
  • Hitachi Hus153030vls300 300GB Server SAS
  • Dell ST3146855SS Seagate 3.5 inch 146GB 15K SAS

You need to put less focus on the maximum interface speed and look more at the physical disk's performance characteristics, as these are typically the bottleneck, as described on the page you linked for the Hitachi Hus153030vls300 300GB Server SAS disk.

In terms of performance, the important figures listed in the Hitachi PDF are:

  • Data buffer (MB) 16
  • Rotational speed (RPM) 15,000
  • Latency average (ms) 2.0
  • Media transfer rate (Mbits/sec, max) 1441
  • Sustained transfer rate (MB/sec, typ.) 123-72 (zone 0-19)
  • Seek time (read, ms, typical) 3.6 / 3.4 / 3.4

As all of these figures mean the disk will not be able to saturate even a 3 Gbps channel, there is no point in it having a 6 Gbps channel.
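
To put rough numbers on that, here is a back-of-the-envelope sketch (the 8b/10b encoding overhead is part of the SAS physical layer, and the 123 MB/s figure is the datasheet sustained rate quoted above):

    # Rough comparison of SAS link payload bandwidth vs. the drive's sustained rate.
    # The sustained figure is the Hitachi datasheet number quoted above.

    def sas_payload_mb_s(line_rate_gbps):
        """Usable payload bandwidth of a SAS link after 8b/10b encoding."""
        return line_rate_gbps * 1e9 / 10 / 1e6   # 10 line bits per payload byte

    SUSTAINED_MB_S = 123   # fastest zone of the 15K Hitachi drive

    for gbps in (3, 6):
        link = sas_payload_mb_s(gbps)
        print("%d Gbps link: ~%.0f MB/s payload; drive sustains at most %d MB/s (%.0f%% of the link)"
              % (gbps, link, SUSTAINED_MB_S, 100 * SUSTAINED_MB_S / link))

A single 15K spindle therefore uses well under half of even a 3 Gbps link, which is consistent with the interface swap making no visible difference.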

I cannot imagine a RAID controller that can utilise each disk's maximum performance in the same array at the same time. So assuming you have a RAID 1 of 2 disks, the first capable of 60MB/s sustained sequential read and write speed and the second of only 50MB/s, then writing to the array will be limited to 50MB/s, while a decent RAID card will be able to run 2 simultaneous read streams, one at 60MB/s and the other at 50MB/s. The more complex the array, the more complicated these figures become.
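
As a minimal sketch of that reasoning (the 60/50 MB/s figures are the hypothetical ones from the paragraph above, not measurements):

    # Toy model of a 2-disk RAID 1 with mismatched members.
    # Writes must hit both members, so the slower disk sets the pace;
    # reads can be spread across members, so the streams can add up.

    disk_speeds_mb_s = [60, 50]   # hypothetical sustained rates of the two members

    raid1_write_ceiling = min(disk_speeds_mb_s)   # every write goes to every member
    raid1_read_ceiling = sum(disk_speeds_mb_s)    # best case: one read stream per member

    print("RAID 1 write ceiling:", raid1_write_ceiling, "MB/s")
    print("RAID 1 read ceiling :", raid1_read_ceiling, "MB/s")

The same slowest-member logic applies within each mirrored pair of your RAID 10 E: volume.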

Some other notes

  • the maximum transfer rate of a disk differs across the platter; it is typically fastest at the start (outer zones) of the disk.
  • sequential reads are the fastest sustained operations a disk can do; random reads or writes are significantly slower.
  • typically a RAID controller will disable a disk's onboard write cache, and will only use its own cache for writes if it has a good battery or you override its default.
  • I have read of some instances of disk/RAID firmware combinations that falsely detect a bad battery and disable all write caching, so update the firmware for both the disks and the RAID controller.

There are some disks advertised as 6 Gbps high-performance disks that are in fact not that high-performance; they just have the 6 Gbps interface and couldn't even saturate a 3 Gbps link anyway (about 357 MiB/s at raw line rate, or roughly 300 MB/s of payload after 8b/10b encoding).

The main benefit of 6Gbps SAS/SATA is for SSDs and for expanders/port multipliers (i.e. attaching multiple disks to one SAS/SATA port).


I'm not very familiar with Windows systems, but here are some points to take into consideration when benchmarking, especially with I/O.

Keep in mind this diagram of the layers between your application and the disks:

Application <=> Filesystem (OS) <=> Disk controller <=> Hard drive

Each part in this chain has its own method of moving information to the layers above and below it, its own cache, its own configuration, and so on.

  • Application (here, your benchmarking tool): writing large modifications in one big block is better than doing many little writes. Are you waiting for a full flush to disk? Are you doing sequential or random access? (See the sketch after this list.)
  • Filesystem: there are many parameters here: caching by the OS, data pre-fetching, block size.
  • Disk controller: it is the central point before the hard drives, and its configuration will count for 30% of your tweaking. The main points are:
    • Cache ratio between reads and writes: depending on whether your application is read- or write-intensive, configure this ratio accordingly.
    • Battery-backed caching, which determines whether write-through or write-back mode is used.
    • RAID level: choose the level according to your fault-tolerance needs. RAID 0 for no tolerance but great performance, RAID 1 for fault tolerance but only 50% of total disk space usable, RAID 5/6 as a compromise...
  • Hard drive: a higher rotational speed lets you reach data located in different regions of the drive more quickly, so it is better for random seeks.
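
To illustrate the application-level point above, here is a generic sketch (not a replacement for SQLIO; the file name and sizes are arbitrary) that contrasts one large buffered write against many small writes that are each forced to disk:

    import os
    import time

    PATH = "io_test.bin"          # arbitrary test file
    TOTAL = 16 * 1024 * 1024      # 16 MiB written in each test
    SMALL = 4 * 1024              # 4 KiB per small write

    def timed(label, func):
        start = time.perf_counter()
        func()
        elapsed = time.perf_counter() - start
        print("%-22s %.1f MB/s" % (label, TOTAL / elapsed / 1e6))

    def one_large_write():
        with open(PATH, "wb") as f:
            f.write(b"\0" * TOTAL)
            f.flush()
            os.fsync(f.fileno())          # one flush at the end

    def many_small_synced_writes():
        chunk = b"\0" * SMALL
        with open(PATH, "wb") as f:
            for _ in range(TOTAL // SMALL):
                f.write(chunk)
                f.flush()
                os.fsync(f.fileno())      # force every small write to the disk

    timed("one large write:", one_large_write)
    timed("small synced writes:", many_small_synced_writes)
    os.remove(PATH)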

Also, look into data alignment: I have seen Windows create misaligned partitions many times. When that happens and the filesystem wants to write one 4 KB block, it results in 2 I/Os to the drive, because the filesystem block straddles 2 device blocks.
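
A quick way to check the offset arithmetic (the 63-sector offset below is the classic one that older Windows versions used; substitute your actual partition offset and stripe size):

    # Check whether a partition's starting offset lines up with the underlying
    # block and stripe boundaries. 63 * 512 = 32256 bytes is the classic
    # misaligned offset from older Windows versions; 1 MiB is the modern default.

    def check_alignment(offset_bytes, boundaries=(4096, 64 * 1024)):
        for boundary in boundaries:
            state = "aligned" if offset_bytes % boundary == 0 else "MISALIGNED"
            print("offset %d vs %d KiB boundary: %s"
                  % (offset_bytes, boundary // 1024, state))

    check_alignment(63 * 512)       # old-style offset
    check_alignment(1024 * 1024)    # 1 MiB offset

On Windows the actual starting offset can be read with, for example, wmic partition get Name,StartingOffset.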

More details would help us to find the bottleneck.

Adrien.


You need to upgrade the firmware of the H700 controller, the HDDs, and the backplane if there is one. If you run Linux, you only need to upgrade the firmware.

Also, before doing this you can install Dell OpenManage Server Administrator (OMSA), version 7.3.0.1 at the moment, to check whether it reports any incompatibility issues.

You also need to use the same type of drives within the same array if it's SAS.

So basically, if you have the wrong HDD firmware, old SAS controller firmware, or a mix of different drives in the array (even SATA drives can be run behind the SAS controller), there is no way you would ever get consistent performance across all drives.

Actually, just having different drive types could cause this on its own.