Take a look at the following article at nixCraft: HowTo: Speed Up Linux Software Raid Building And Re-syncing.

It explains the different settings in /proc that can be adjusted to influence software RAID speed (not just during building/syncing, as the title suggests).
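
As a rough sketch of the kind of tunables that article walks through (the values below are only examples, and /dev/md0 stands in for your array):

   # Raise the floor and ceiling for md rebuild/resync throughput (KB/s per device)
   echo 50000  > /proc/sys/dev/raid/speed_limit_min
   echo 200000 > /proc/sys/dev/raid/speed_limit_max

   # Enlarge the stripe cache (RAID 5/6 arrays only), in pages per device
   echo 4096 > /sys/block/md0/md/stripe_cache_size

   # Increase read-ahead on the array, in 512-byte sectors
   blockdev --setra 8192 /dev/md0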


What kind of RAID?

Any combination of RAID 0 and 1 will give no great improvement in non-concurrent benchmarks, for either latency or bandwidth. RAID 3/5 should give better bandwidth but no difference in latency.
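
(If you are not sure which level an existing array uses, /proc/mdstat or mdadm will tell you; /dev/md0 here is just an example device name.)

   cat /proc/mdstat
   mdadm --detail /dev/md0 | grep 'Raid Level'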

C.


The problem is that, in spite of your intuition, Linux software RAID 1 does not use both drives for a single read operation. To get a speed benefit, you need to have two separate read operations running in parallel.

Reading a single large file will never be faster with RAID 1.
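
You can see the difference yourself: one sequential read stream is served from a single mirror, while two independent streams can be balanced across both disks. A rough illustration, with the file names as placeholders (drop the page cache between runs so you are measuring the disks, not RAM):

   echo 3 > /proc/sys/vm/drop_caches

   # One stream: RAID 1 reads it from a single member disk
   dd if=bigfile1 of=/dev/null bs=1M

   # Two concurrent streams: md can send each to a different mirror
   dd if=bigfile1 of=/dev/null bs=1M &
   dd if=bigfile2 of=/dev/null bs=1M &
   wait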

To get the same level of redundancy, with the expected speed benefit, you need to use RAID 10 with the "far" layout. This stripes the data and mirrors it across the two disks. Each disk is divided into segments; with two segments, the stripes in drive 1, segment 1 are mirrored to drive 2, segment 2, and drive 1, segment 2 is mirrored to drive 2, segment 1. Detailed explanation.

As you can see from these benchmarks, RAID 10,f2 gets read speeds similar to RAID 0 (all figures in MB/s):

   RAID type      sequential read     random read    sequential write   random write
   Ordinary disk       82                 34                 67                56
   RAID0              155                 80                 97                80
   RAID1               80                 35                 72                55
   RAID10,n2           79                 56                 69                48
   RAID10,f2          150                 79                 70                55

f2 simply means far layout with 2 segments.
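
For reference, an array with that layout is created by passing the layout option to mdadm (the member partitions below are placeholders):

   mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1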

Furthermore, in my personal tests, I found that write performance suffered. The benchmarks above suggest that with RAID10,f2 the write speed should be nearly equivalent to a single disk, but I was seeing almost a 30% drop. After much experimentation, I found that changing the IO scheduler from cfq to deadline fixed the issue.

echo deadline > /sys/block/md0/queue/scheduler

Here is some more information: http://www.cyberciti.biz/faq/linux-change-io-scheduler-for-harddisk/
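
Note that depending on the kernel, the md device itself may not expose a scheduler (md does not queue requests through one), in which case the change is made on the underlying member disks instead; sda and sdb below are placeholders for the array members:

   cat /sys/block/sda/queue/scheduler      # current scheduler shown in brackets
   echo deadline > /sys/block/sda/queue/scheduler
   echo deadline > /sys/block/sdb/queue/scheduler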

With this setup, you should be able to get sequential reads of about 185-190 MB/s.
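
A quick way to check the sequential read speed of the finished array (again, /dev/md0 is a placeholder):

   hdparm -t /dev/md0
   # or, bypassing the page cache with dd:
   dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct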