RAID configuration for large NAS
There is already a RAID level for what you want; it's called RAID 10.
The MTBF of both professional and consumer drives has increased by an order of magnitude in recent years, but the unrecoverable read error rate has stayed relatively constant. That rate is estimated at one error per 10^14 bits read — roughly one bad bit per 12 terabytes read — for consumer SATA drives (source).
So for every full pass over your 24 TB array, statistically you will encounter at least 2 single-bit errors. Each of those errors can trigger a RAID 5 rebuild and, worse, a second error during the rebuild will cause a double fault.
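The arithmetic above can be checked with a quick back-of-envelope calculation, assuming the 1-error-per-10^14-bits figure and a 24 TB (decimal terabytes) pass:

```shell
# Expected unrecoverable read errors (UREs) during one full pass over a
# 24 TB array, assuming a rate of 1 URE per 1e14 bits read (consumer SATA).
array_tb=24
awk -v tb="$array_tb" 'BEGIN {
    bits = tb * 1e12 * 8          # TB -> bits
    printf "expected UREs per full pass: %.1f\n", bits / 1e14
}'
# prints: expected UREs per full pass: 1.9
```

So even a single clean pass over the full array is statistically likely to hit one or two unreadable sectors, which is exactly the rebuild-failure scenario described above.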
This is precisely my everyday work: building Linux storage servers.
- The Areca card is OK. You can use it in RAID-6; it will provide reasonable security. Buy the optional battery backup unit (BBU), too.
- Use enterprise-grade disks, not desktop drives. You'll spend 400 more bucks on your server, but it's well worth it. Buy two spare drives, and don't mix and match: use disks of the same model.
- For the filesystem, use XFS. Not kidding: ext3 and friends simply won't be up to the job on 16 TB+ filesystems. Even after a serious crash, xfs_repair is quite fast on a 20 TB volume (15 minutes, no more).
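As a sketch, creating and checking the filesystem looks like this; `/dev/sdb1` and the `nasdata` label are placeholders for your actual RAID volume:

```shell
# Create an XFS filesystem on the RAID volume (hypothetical device /dev/sdb1).
mkfs.xfs -L nasdata /dev/sdb1

# After a crash, start with a read-only check: xfs_repair -n only reports
# problems, it does not modify the disk. Run the real repair on the
# unmounted volume only once you've seen what -n reports.
xfs_repair -n /dev/sdb1
xfs_repair /dev/sdb1
```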
- Preferably, use LVM2; it will ease storage management even if you don't plan to change things much.
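A minimal LVM2 layout sketch, assuming the RAID volume appears as `/dev/sdb`; the device and the `vg_nas`/`lv_data` names are examples, not requirements:

```shell
# Put the whole RAID volume under LVM2 so it can be grown or snapshotted later.
pvcreate /dev/sdb
vgcreate vg_nas /dev/sdb

# Leave some free space in the volume group for snapshots or future growth.
lvcreate -n lv_data -l 90%FREE vg_nas

# The filesystem then goes on the logical volume, not the raw device.
mkfs.xfs /dev/vg_nas/lv_data
```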
- Install the Areca management tool and write a cron job to send you a daily email with a health check.
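The daily check can be a simple cron entry like the sketch below; `areca-cli vsf info` is a placeholder invocation (substitute the real command from your Areca tool's documentation), and the email relies on cron's built-in behavior of mailing any job output to MAILTO, which assumes a working local MTA:

```shell
# /etc/cron.d/raid-health (sketch): mail a daily controller status report.
# "areca-cli vsf info" is a hypothetical command; check your Areca CLI docs.
MAILTO=admin@example.com
0 7 * * * root /usr/local/bin/areca-cli vsf info 2>&1
```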
- Don't forget backups. RAID is not a backup: if someone simply deletes an important file, you won't be able to recover it without a proper backup. I personally use rdiff-backup to save all important data on a dedicated server with a one-month history; you can also create two RAID volumes on your file server and back up one onto the other.
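A minimal rdiff-backup sketch of the scheme described above, assuming the data lives in `/srv/data` and the backup server is reachable over SSH as `backuphost` (both are placeholder names; the syntax shown is the classic rdiff-backup 1.x style):

```shell
# Mirror /srv/data to the backup server; rdiff-backup keeps a current mirror
# plus reverse increments, so old versions of files remain recoverable.
rdiff-backup /srv/data backuphost::/backups/data

# Prune increments older than one month to enforce the one-month history
# (--force is needed when more than one increment would be removed).
rdiff-backup --remove-older-than 1M --force backuphost::/backups/data

# Example restore: a file as it was three days ago.
rdiff-backup -r 3D backuphost::/backups/data/somefile /tmp/somefile.restored
```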