LSI RAID: Write cache policy affects read performance?

By using a write-back cache, you are saving disk IOPS: the controller can coalesce many small writes into one larger write.

Thus, more IOPS are left available for reads.

This assumes that the tests run reads and writes concurrently. If a given test is only reads or only writes, this won't matter.
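To illustrate, a concurrent mixed workload can be generated with fio. This is a sketch, not a tuned benchmark: the file path, size, and 70/30 read/write mix are assumptions you should adjust to your array.

```shell
# Hypothetical mixed benchmark: 70% random reads, 30% random writes,
# issued concurrently, so IOPS saved by the write cache can go to reads.
# Replace /data/fio.test with a file on the array under test.
fio --name=mixed --filename=/data/fio.test --size=1G \
    --rw=randrw --rwmixread=70 --bs=4k \
    --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=60 --time_based --group_reporting
```

Running the same job with --rw=randread and --rw=randwrite separately should show much less sensitivity to the cache policy.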


What filesystem is this test run on?

What comes to mind is atime. If your filesystem is mounted with the atime option, or is missing the noatime/relatime mount option, you will get a write for every read.

(atime means recording the last access time of each file)
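You can see this effect directly. The sketch below (the temp file location is whatever mktemp picks) reads a file and checks whether its access timestamp moved, which is exactly the inode update the kernel has to write back to disk:

```shell
# Sketch: a plain read updates atime on filesystems mounted without
# noatime, forcing the kernel to write the inode back to disk.
f=$(mktemp)
echo data > "$f"
before=$(stat -c %X "$f")   # atime, seconds since the epoch
sleep 1
cat "$f" > /dev/null        # an ordinary read
after=$(stat -c %X "$f")
echo "atime before: $before, after: $after"
rm -f "$f"
```

Note that with relatime (the default on most modern distributions) atime only advances when it lags behind mtime or is more than a day old, so on a relatime mount the second timestamp may not change.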

It might be helpful if you post the output of

mount

and specify on which device you ran the tests.