RAID recommendation for managing redundancy of 16 disks

Solution 1:

Try network-based redundant storage, e.g. Ceph. You can configure it to store 9 copies of each block and to place each copy on a different OSD, so every copy ends up on a separate device; in that case you really can remove 8 OSDs and still have at least one copy of every block left in the system.
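A rough sketch of such a pool (the pool name, placement-group count and CRUSH rule name below are placeholders, not anything from the question):

```
# CRUSH rule whose failure domain is the individual OSD,
# so no two replicas land on the same OSD
ceph osd crush rule create-replicated one-per-osd default osd

# Replicated pool using that rule, keeping 9 copies of every object
ceph osd pool create ninecopies 128 128 replicated one-per-osd
ceph osd pool set ninecopies size 9
```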

Yes, this is very inefficient in terms of storage, but that was exactly your requirement. I consider it greatly exaggerated, to the point of meaninglessness. The world seems to have reached a consensus that nobody really needs that many copies: Martian rovers have three computers, and that is enough even in a place where conditions are extremely harsh and anyone who could fix them is at least half a year away.

It is better to invest in a system that can repair itself live than to keep so many static copies. Ceph does exactly that: you specify that you need, say, 3 copies of each block and that those copies must not be co-located. Should some device become inaccessible, the system detects it, knows which blocks were stored there, and immediately begins redistributing them to reach the required redundancy again. You can also configure it to block access when only one copy is left, so it gets a chance to repair itself (re-replicate that copy and resume access). You can create several pools with different requirements in a single cluster, and when you need to extend the storage you just add more OSDs.
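A minimal sketch of that kind of setup (the pool name is hypothetical): three copies of each object, with I/O blocked as soon as fewer than two copies are reachable, which gives the cluster a chance to re-replicate before serving data again.

```
ceph osd pool create data 128 128 replicated
ceph osd pool set data size 3       # keep 3 copies of each object
ceph osd pool set data min_size 2   # block I/O when fewer than 2 copies are reachable
```

With the stock replicated CRUSH rule the copies are already placed on different hosts, so they are not co-located.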

Solution 2:

You did not specify the operating system, which is essential information for an answer.

Personally, I would not adopt such a mechanism, because it does not provide sufficient redundancy and the recovery time is too slow.

"Backups are already planned, but still, if the RAID volume stops working, several mission-critical servers and services that depend on it will stop working as well."

I would therefore use a ZFS pool on FreeBSD (v12, not v13 with OpenZFS, which is still not mature enough for me), replicated to another pool (using, for example, syncoid/sanoid) on a different machine (if possible) or even to a cheap iSCSI NAS-based device.
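A minimal sketch of that replication, assuming sanoid/syncoid are installed on both machines (dataset names, retention values, the backup host and the config path are my placeholders):

```
# /usr/local/etc/sanoid/sanoid.conf on the primary: snapshot schedule
[tank/data]
        use_template = production
        recursive = yes

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

```
# periodic replication (e.g. from cron) to the pool on the second machine
syncoid -r tank/data backup@standby-host:backuppool/data
```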

A sort of "hardware RAID", but a "full" one: if one machine goes down, the other can be used.

In fact, the single point of failure is not only the volume, but also, and above all, the machine to which it is connected.

Also, to minimize recovery time when problems occur, I never recommend volumes of more than 8 disks, because it is much, much easier to find RAID controllers (flashed in non-RAID/HBA mode for ZFS) that handle up to 8 drives.

What do you do if, for example, a SAS controller with 16 connectors fails?

You can't just take the disks and connect them to the SATA controllers of a $500 computer bought in an emergency.
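For what it's worth, a sketch of how 16 disks could be laid out following that 8-disk rule (device and pool names are placeholders, and raidz2 is just an example redundancy level):

```
# Two independent 8-disk raidz2 pools, one per 8-port HBA,
# so either half can be moved to a replacement controller on its own
zpool create pool1 raidz2 da0 da1 da2 da3 da4 da5 da6 da7
zpool create pool2 raidz2 da8 da9 da10 da11 da12 da13 da14 da15
```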