Improving mdadm RAID-6 write speed

Solution 1:

Have you tried tuning /sys/block/mdX/md/stripe_cache_size?

According to this forum post (in Norwegian, sorry), "tuning this parameter is more essential the more disks you have and the faster your system is":

On my system I get the best performance with the value 8192. With the default value of 256, write performance drops by 66%.
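For reference, the parameter lives in sysfs and can be changed at runtime without stopping the array. A minimal sketch, assuming your array is md0 (substitute your own device name); to my knowledge the kernel accepts values between 17 and 32768, measured in pages:

```shell
# Show the current stripe cache size (in pages) -- "md0" is an
# assumed device name, substitute yours.
cat /sys/block/md0/md/stripe_cache_size

# Try a larger cache, then re-run your benchmark to compare.
echo 8192 > /sys/block/md0/md/stripe_cache_size
```

The setting does not survive a reboot, so once you find a good value you'll want to reapply it from a boot script.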

Quoting his speed for comparison:

Disks: 8x Seagate 2TB LP (5900 RPM) in mdadm RAID 6 (-n 512) (stripe_cache_size=8192).

CPU: Intel X3430 (4x2.4GHz, 8GB DDR3 ECC RAM)

Speed: 387 MB/s sequential write, 704 MB/s sequential read, 669 random seeks per sec.

My home server has almost the same disks as yours, but in RAID 5:

Disks: 4x 1.5TB WD Green in RAID 5 (stripe_cache_size=256, the default)

CPU: Intel i7 920 (2.66 GHz, 6 GB RAM)

Speed: 60 MB/s sequential write, 138 MB/s sequential read (according to Bonnie++)

So it looks like sequential write performance is less than half of read performance.
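If you want to sanity-check numbers like these without installing Bonnie++, a plain dd run gives a rough sequential figure. A minimal sketch; TESTDIR is a placeholder for a directory on the array (it defaults to the current directory here so you can try it anywhere):

```shell
# Rough sequential write/read check with dd.
# TESTDIR is an assumed mount point -- point it at the md array.
TESTDIR=${TESTDIR:-.}
TESTFILE="$TESTDIR/ddtest.$$"

# Write 256 MiB, forcing data to disk before dd reports a rate
# (without fdatasync you mostly measure the page cache).
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fdatasync

# Drop the page cache (needs root) so the read hits the disks.
[ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches

# Read it back, then clean up.
dd if="$TESTFILE" of=/dev/null bs=1M
rm -f "$TESTFILE"
```

Increase `count` for a steadier number; 256 MiB is on the small side for an array with several GB of RAM in front of it.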

For what performance to expect, the Linux Raid Wiki says about RAID 5:

Reads are almost similar to RAID-0 reads, writes can be either rather expensive (requiring read-in prior to write, in order to be able to calculate the correct parity information, such as in database operations), or similar to RAID-1 writes (when larger sequential writes are performed, and parity can be calculated directly from the other blocks to be written).

And about RAID 6:

Read performance is similar to RAID-5 but write performance is worse.
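The gap the wiki describes comes from small-write accounting: a write smaller than a full stripe has to read the old data and parity before it can compute and write the new ones, and RAID 6 maintains two parity blocks (P and Q) instead of one. A back-of-the-envelope sketch of the classic read-modify-write cost:

```shell
# Disk IOs needed for one sub-stripe write (read-modify-write path):
# RAID 5: read old data + old parity, write new data + new parity.
# RAID 6: the same, but with two parity blocks (P and Q).
raid5_ios=$(( 2 + 2 ))
raid6_ios=$(( 3 + 3 ))
echo "RAID 5: $raid5_ios IOs per small write, RAID 6: $raid6_ios IOs"
```

Large sequential writes avoid this penalty because full stripes let the parity be computed from the data being written, which is exactly why a bigger stripe cache (letting md gather full stripes) helps.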

Solution 2:

Try

echo 32768 > /sys/block/md0/md/stripe_cache_size

and check your write speed again ;)
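One caveat before going straight to the maximum: the stripe cache is not free. As I understand it, the kernel allocates roughly one page per cache entry per member disk, so memory use is about stripe_cache_size x page size x number of disks:

```shell
# Rough RAM cost of the stripe cache, assuming 4 KiB pages
# and an 8-disk array (adjust for your setup).
stripe_cache_size=32768
page_size=4096
disks=8

bytes=$(( stripe_cache_size * page_size * disks ))
echo "$(( bytes / 1024 / 1024 )) MiB"   # 1024 MiB for these values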