SSDs as Linux swap for large virtual mem applications?

Solution 1:

I would suggest that SSDs are not a good fit for a swap partition, because their performance degrades over time with a large number of writes. This has to do with the fact that SSDs have a limited lifetime of writes, and therefore the drive plays all kinds of tricks to minimize the number of times any single sector is rewritten.

Solution 2:

I'm afraid I'm going to disagree with the other responses. Yes, an SSD cell will only take something like 100K writes. For a 100 GB drive with decent wear levelling, that means on the order of 10^16 bytes written in total - a steady stream of 100 MB/s for roughly three years of continuous writing, and real swap traffic averages far below a sustained 100 MB/s, so endurance is unlikely to be the limiting factor. Also, performance degradation is taken care of with discard (TRIM) support, and modern drives don't degrade noticeably with use.
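Here's a back-of-the-envelope sketch of that endurance arithmetic; the per-cell cycle count and the swap write rates are assumed ballpark figures, not specs for any particular drive:

```python
# Back-of-the-envelope SSD endurance estimate for swap use.
# All figures are assumed ballpark values, not specs for a real device.

capacity_bytes = 100e9      # 100 GB drive
pe_cycles = 100_000         # assumed program/erase cycles per cell
total_writable = capacity_bytes * pe_cycles   # ~1e16 bytes with ideal wear levelling

year = 365 * 24 * 3600
for rate_mb_s in (100, 10, 1):                # assumed average swap write rates
    seconds = total_writable / (rate_mb_s * 1e6)
    print(f"{rate_mb_s:>4} MB/s sustained -> ~{seconds / year:.0f} years to wear out")
```

Even a swap device that averages 10 MB/s of writes around the clock would take decades to burn through that budget under these assumptions.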

Yes, having more servers and RAM is even better, but while 100 GB of SSD costs maybe $200 these days, a server with 100 GB of RAM will cost you about 100x that - just for the RAM. Power consumption will likely also be higher by a similar factor.

I think SSD would be great for swap; going from a handful of IOPS to tens of thousands is just what you need. But I'm only speculating here - I'd love to see real numbers based on SSD swapping.
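If you do want real numbers, one way to gather them (a sketch, assuming a Linux box with /proc mounted) is to watch the pswpin/pswpout counters in /proc/vmstat while the workload runs:

```python
# Sample swap-in/swap-out rates from /proc/vmstat (counters are in pages).
# A sketch for collecting real numbers while a swap-heavy workload runs.
import time

def swap_counters():
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            if key in ("pswpin", "pswpout"):
                counters[key] = int(value)
    return counters

PAGE_KB = 4  # assumes 4 KB pages (true on x86; check `getconf PAGESIZE`)
INTERVAL = 5

prev = swap_counters()
while True:
    time.sleep(INTERVAL)
    cur = swap_counters()
    in_kb = (cur["pswpin"] - prev["pswpin"]) * PAGE_KB / INTERVAL
    out_kb = (cur["pswpout"] - prev["pswpout"]) * PAGE_KB / INTERVAL
    print(f"swap in: {in_kb:8.0f} KB/s   swap out: {out_kb:8.0f} KB/s")
    prev = cur
```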

Edit: to answer the OP, I agree the kernel is likely to be smarter than you (no offense! :-), so you could try to overallocate with your old rotating disk, too.
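For example, a crude way to compare rotating-disk swap against SSD swap (a sketch; the allocation size is something you'd pick to exceed your own machine's RAM) is to allocate more anonymous memory than you have RAM and time random page touches, forcing the kernel to swap:

```python
# Crude swap benchmark: allocate more anonymous memory than fits in RAM,
# then time random page touches so the kernel is forced to swap.
# Run once with swap on the rotating disk and once with swap on the SSD.
import mmap
import random
import time

SIZE = 8 * 1024**3        # assumed: pick something larger than your RAM
PAGE = 4096
TOUCHES = 100_000

buf = mmap.mmap(-1, SIZE)             # anonymous mapping backed by swap
for off in range(0, SIZE, PAGE):      # fault every page in once
    buf[off] = 1

random.seed(0)
start = time.time()
for _ in range(TOUCHES):
    buf[random.randrange(SIZE // PAGE) * PAGE] = 1
elapsed = time.time() - start
print(f"{TOUCHES} random page touches in {elapsed:.1f}s "
      f"({TOUCHES / elapsed:.0f} touches/s)")
```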

Solution 3:

My guess is that your performance won't scale with your cost and effort. My gut tells me you may be MUCH better off with additional servers packed full of RAM, if you can partition your data cache in a way that makes sense.

An SSD has the benefit of near-zero latency (comparatively speaking) when retrieving data, but the various buses that connect it to main memory or the network are going to slow it down considerably relative to RAM.
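To put rough numbers on that (these latencies are generic ballpark assumptions, not measurements of any specific hardware):

```python
# Rough access-latency comparison; the figures are generic ballpark
# assumptions, not measurements of any particular hardware.
latencies = {
    "DRAM":             100e-9,   # ~100 ns
    "SATA SSD (rand)":  100e-6,   # ~100 us including controller/bus overhead
    "7200rpm disk":     8e-3,     # ~8 ms average seek + rotation
}
ram = latencies["DRAM"]
for name, lat in latencies.items():
    print(f"{name:17} {lat * 1e6:10.1f} us   ({lat / ram:8.0f}x slower than DRAM)")
```

So the SSD sits roughly three orders of magnitude behind RAM but a couple of orders of magnitude ahead of a rotating disk, which is exactly the gap that matters for swap.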

Solution 4:

In addition to the other very good responses, you may want to look at KSM (Kernel Samepage Merging) as a way of combining identical pages of data in RAM. It was merged into Linux for the 2.6.32 release.
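As a minimal sketch (assuming a kernel built with CONFIG_KSM and root access), you can turn KSM on and watch how much it is deduplicating via sysfs; note that KSM only scans memory an application has marked with madvise(MADV_MERGEABLE):

```python
# Enable KSM and report how many pages it has merged.
# Requires root and a kernel with CONFIG_KSM; KSM only considers regions
# an application has registered with madvise(MADV_MERGEABLE).
KSM = "/sys/kernel/mm/ksm"

def read_stat(name):
    with open(f"{KSM}/{name}") as f:
        return int(f.read())

with open(f"{KSM}/run", "w") as f:   # 1 = start the KSM scanner thread
    f.write("1")

shared = read_stat("pages_shared")    # de-duplicated "master" pages
sharing = read_stat("pages_sharing")  # pages now mapped to a shared page
print(f"pages_shared:  {shared}")
print(f"pages_sharing: {sharing}")
if shared:
    print(f"average sharing ratio: {sharing / shared:.1f}")
```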