Running swap on RAID10 or RAID5?
In follow-up to this previous question (and its excellent answer), I am curious to know whether running swap on a RAID5 might not be better than on a RAID10.
My thinking is that you might lose a bit of performance because it wouldn't be purely striped, but if a drive goes down you can rebuild the missing data from parity.
Perhaps that doesn't make sense, and stripe-and-mirror would be better.
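To make the rebuild idea concrete, here is a minimal Python sketch (the block contents and 3-drive layout are made up for illustration) of how a lost block comes back from XOR parity:

```python
# A minimal sketch of RAID5 rebuild-from-parity on a hypothetical
# 3-drive array: two data blocks plus one parity block per stripe.
# Block contents here are invented for illustration.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

data0 = b"swap page A"             # block on drive 0
data1 = b"swap page B"             # block on drive 1
parity = xor_blocks(data0, data1)  # parity block on drive 2

# Drive 0 dies: its block is recomputed from the survivors.
rebuilt = xor_blocks(parity, data1)
assert rebuilt == data0
print("rebuilt:", rebuilt)
```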
Expansion
Thanks for the comments regarding buying more RAM - however, please note that this is not what the question is about. For the sake of this question, you can presume that the system's memory has already been maxed out. I am fully aware of the preference for "real" memory over "virtual" memory.
Several environments expect or require swap to be configured (one of note is the system I work most heavily with, which requires a swap segment at least equal to the physical memory on the machine for the environment to be supported by the vendor). It is, therefore, prudent to consider the best way to implement swap for such an environment.
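Incidentally, checking that a machine meets such a sizing rule is straightforward; a rough sketch for Linux, assuming the usual /proc/meminfo fields:

```python
# Quick sanity check of the "swap at least equal to RAM" rule on
# Linux, read from /proc/meminfo (values are reported in kB).

def meminfo_kb(field: str) -> int:
    """Return a /proc/meminfo field value in kB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

ram_kb = meminfo_kb("MemTotal")
swap_kb = meminfo_kb("SwapTotal")
print(f"RAM {ram_kb} kB, swap {swap_kb} kB")
if swap_kb < ram_kb:
    print("WARNING: swap is smaller than physical memory")
```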
Solution 1:
As the others have said, buy more RAM. However, Chopper3's answer is not exactly correct.
Given that both provide fault tolerance, and leaving aside capacity, the reason for choosing one over the other is all about performance - and that depends on the workload. For a system with few processes but big memory requirements (e.g. AI engines, FEA) you want high bandwidth - RAID5. For a system with a lot of context switching, it's about reducing latency - hence RAID10.
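To illustrate that trade-off, here is a hypothetical first-order model; the per-drive figures are placeholders and it ignores caching, stripe alignment and controller smarts, so treat the output as directional only:

```python
# Textbook first-order model of array performance, for illustration
# only. Per-drive figures below are hypothetical placeholders.

DRIVE_MBPS = 150   # sequential throughput of a single drive, MB/s
DRIVE_IOPS = 180   # random small-write IOPS of a single drive

def raid5(n: int) -> dict:
    """n-drive RAID5: n-1 data spindles stream in parallel, but each
    small random write costs ~4 ops (read old data, read old parity,
    write new data, write new parity)."""
    return {"seq_read_mbps": (n - 1) * DRIVE_MBPS,
            "rand_write_iops": n * DRIVE_IOPS // 4}

def raid10(n: int) -> dict:
    """n-drive RAID10: n/2 striped mirror pairs; each small random
    write hits both halves of one mirror (penalty of 2). Clever
    controllers can push reads above this floor by using both
    mirror halves."""
    return {"seq_read_mbps": (n // 2) * DRIVE_MBPS,
            "rand_write_iops": n * DRIVE_IOPS // 2}

for name, stats in [("RAID5 x4", raid5(4)), ("RAID10 x4", raid10(4))]:
    print(name, stats)
```

Under these assumptions RAID5 wins on streaming bandwidth (450 vs 300 MB/s) while RAID10 wins on small random writes (360 vs 180 IOPS), which is the workload distinction made above.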
Solution 2:
No, it doesn't make sense; RAID10 would be better. But you should really try to have enough real memory that you don't have to care.
Solution 3:
Buy more RAM, or at least enough to cover your application's needs. You shouldn't worry about the speed of swap unless you plan on swapping often. You shouldn't plan on swapping often... Either way, RAID10 is the better option.
Solution 4:
For swap space I would not recommend RAID5. There is a write performance issue with RAID5 that particularly affects workloads involving many small writes, because every updated block potentially involves an extra read so that the controller can correctly compute the new parity block (on a 3-drive, or 3-plus-hot-spare, R5 array each stripe has three blocks: two for pure data and one for parity information). Neither RAID1 nor RAID10 suffers this penalty, since writes simply go to both mirrored copies. A 3-drive RAID10 (not one of the standard arrangements, but supported by Linux's software RAID and some hardware controllers; IBM's controllers call it RAID1E) gives you the same mirror-style write behaviour on three drives.
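To make that penalty concrete, here is a small Python sketch (block contents are made up) of the read-modify-write cycle for a single-block update on such a 3-drive array:

```python
# Read-modify-write for one small write on a hypothetical 3-drive
# RAID5 stripe: updating one data block costs two reads and two
# writes, where a bare drive needs one write and a mirror two.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Stripe: two data blocks and their parity.
data_a, data_b = b"old page", b"other pg"
parity = xor_blocks(data_a, data_b)

# Small write: replace data_a. The controller must:
new_a = b"new page"
old_a = data_a                  # read 1: old data block
old_parity = parity             # read 2: old parity block
new_parity = xor_blocks(xor_blocks(old_parity, old_a), new_a)
# write 1: new_a to the data drive
# write 2: new_parity to the parity drive

# The updated parity is still consistent with the untouched block.
assert new_parity == xor_blocks(new_a, data_b)
print("one block update cost: 2 reads + 2 writes")
```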
Of course, if you do not actually expect the swap to be used for anything other than holding a few pages that haven't been accessed for days (so they are swapped out to make room for cache/buffers or other more active use), this is all moot and you should just go for whatever arrangement is right for the rest of your expected workload.
Summary of the options:
Arrangement        Reads             Writes                            Space       Redundancy
-----------------  ----------------  --------------------------------  ----------  --------------------------------------------
RAID5 (3 or 3+hs)  Similar to RAID0  Often slower than bare drives     2 drives    Can survive one drive failing
RAID10 (4 drives)  Varies**          Usually similar to bare drives**  2 drives    Can survive one drive failing and 4 of the
                                                                                   6 "two drive" failure cases
RAID10 (3 drives)  Varies**          Usually similar to bare drives**  1.5 drives  Can survive one drive failing
** RAID1 reads are usually assumed to perform around the same as a bare drive, though an intelligent controller can improve on this for both sequential and random access depending on workload pattern and array layout (the Linux RAID10 driver offers a number of layouts that can sometimes deliver RAID0-like performance for some workloads). Writes are likewise usually similar to writes to a single drive. Some of the layout options that improve read performance can hurt write performance (by increasing the number of head movements required, or their average distance), so use the advanced options with care.
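The "4 of the 6" figure in the table can be verified by brute force; a quick sketch assuming the conventional layout of two striped mirror pairs:

```python
# Brute-force check of the "4 of the 6 two-drive failure cases" claim
# for a hypothetical 4-drive RAID10 laid out as two mirror pairs,
# (0,1) and (2,3). The array only dies if a whole pair is lost.
from itertools import combinations

MIRROR_PAIRS = [{0, 1}, {2, 3}]

cases = list(combinations(range(4), 2))
fatal = [c for c in cases if set(c) in MIRROR_PAIRS]
print(f"survivable: {len(cases) - len(fatal)} of {len(cases)}")  # 4 of 6
```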