If RAM is cheap, why don't we load everything to RAM and run it from there?
Solution 1:
There are a few reasons RAM is not used that way:
- Common desktop (DDR3) RAM is cheap, but not quite that cheap. Especially if you want to buy relatively large DIMMs.
- RAM loses its contents when powered off, so you would need to reload everything at boot time. Say you use an SSD-sized RAM disk of 100 GB: that means roughly a two-minute delay while 100 GB is copied from the disk.
- RAM uses more power (say 2–3 watts per DIMM, about the same as an idle SSD).
- To use that much RAM, your motherboard needs a lot of DIMM sockets and the traces to them. Usually this is limited to six or fewer. (More board space means higher costs, and thus higher prices.)
- Lastly, you also need RAM to run your programs in, so on top of the RAM disk you still need the normal working amount of RAM (e.g. 18 GiB, and enough to hold the data you expect to use).
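The boot-time reload delay mentioned above is easy to estimate. Here is a back-of-the-envelope sketch; the 100 GB size comes from the example above, and the ~800 MB/s sustained read rate is an illustrative assumption for a decent SATA SSD, not a measurement:

```python
# Rough estimate of how long it takes to refill a RAM disk at boot.
# Assumed numbers: 100 GB of data, read back at ~800 MB/s sustained.
ramdisk_bytes = 100 * 10**9   # 100 GB RAM disk (from the example above)
read_rate = 800 * 10**6       # ~800 MB/s sequential read (assumption)

seconds = ramdisk_bytes / read_rate
print(f"Reload takes about {seconds:.0f} s (~{seconds / 60:.1f} minutes)")
# -> Reload takes about 125 s (~2.1 minutes)
```

And that is every boot, for data an SSD would simply have kept across the power cycle.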
Having said that: yes, RAM disks do exist, even as PCI boards with DIMM sockets and as appliances for very high IOPS. (Mostly used in corporate databases before SSDs became an option.) These things are not cheap, though.
Note that there are way more ways of doing this than just by creating a RAM disk in the common work memory.
You can:
- Use a dedicated physical drive for it with volatile (dynamic) memory. Either as an appliance, or with a SAS, SATA or PCI[e] interface.
- You can do the same with battery-backed storage (no need to copy the initial data into it, since it keeps its contents as long as the backup power holds).
- You can use static RAM rather than DRAM (simpler, but more expensive).
- You can use flash or other permanent storage to keep all the data (warning: flash usually has a limited number of write cycles). If you use flash as the only storage, you have simply moved to SSDs. If you store everything in dynamic RAM and save to a flash backup on power-down, you are back to appliances.
I am sure there is way more to describe, from the Amiga's RAD: (a reset-surviving RAM disk) to IOPS, wear leveling and G-d knows what. However, I will cut this short and only list one more item:
DDR3 (current DRAM) prices versus SSD prices:
- DDR3: € 10 per GiB, or € 10,000 per TiB
- SSDs: Significantly less. (About 1/4th to 1/10th.)
Solution 2:
Operating systems already do this, with the page cache:
In computing, a page cache, often called a disk cache, is a "transparent" cache of disk-backed pages kept in main memory (RAM) by the operating system for quicker access. A page cache is typically implemented in kernels with the paging memory management, and is completely transparent to applications.
When you read a page from a disk, your operating system will load that data into memory, and leave it there until it has a better use for that memory. If you have sufficient memory, your OS will only read each page once, and then use it from memory from then on. The only reason the OS will do real disk IO is if it needs to read a page that's not already in memory, or if a page is written to (in which case, you presumably want it saved to the disk).
One advantage of doing things this way is that you don't have to load the entire hard drive into memory, which is useful if it won't fit, and also means you don't waste time reading files that your applications don't need. Another advantage is that the cache can be discarded whenever the OS needs more memory (it's better to have your next disk read be slightly slower, than to have your programs crash because they're out of memory). Also, it's useful that users don't need to manually decide what should be in the ramdisk or not: Whatever you use most often will automatically be kept in main memory.
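The transparency described above is visible from user space with a memory-mapped file: the kernel serves the mapping straight out of the page cache, so the "file" is read through ordinary memory access with no explicit read call. A minimal sketch (the temporary file and its contents are made up for the demo):

```python
import mmap
import os
import tempfile

# Write a small file, then map it into memory. Reads through the mapping
# go via the page cache -- the same pages the kernel cached on the write.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"cached by the kernel, served from RAM")
    with mmap.mmap(fd, 0, access=mmap.ACCESS_READ) as m:
        data = bytes(m)   # plain memory access, no read() call
    print(data.decode())
finally:
    os.close(fd)
    os.unlink(path)
```

On a second access to the same file, even via normal read(), the kernel will typically satisfy the request from these cached pages without touching the disk at all.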
If you have a lot of memory but your applications aren't running as fast as you would expect, there's a good chance they're slower because they're being careful with your data. For example, SQLite is orders of magnitude faster if you tell it not to wait for writes to complete, but your database will be completely broken if you don't shut down cleanly.
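To make that SQLite trade-off concrete, here is a hedged sketch using Python's built-in sqlite3 module. PRAGMA synchronous = OFF is the real SQLite setting being alluded to; the database path and row counts are invented for the demo:

```python
import os
import sqlite3
import tempfile

# With synchronous = OFF, SQLite no longer waits for the OS to confirm
# that writes reached the disk. Much faster -- but an unclean shutdown
# can leave the database corrupted.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("PRAGMA synchronous = OFF")  # fast, unsafe on power loss
con.execute("CREATE TABLE t (n INTEGER)")
with con:
    con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1000
con.close()
```

The default (synchronous = FULL) fsyncs at the critical moments, which is exactly the "running safely" cost described above.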
Also, /tmp is usually a ramdisk on Linux distros, because it's OK if that data gets lost. There's still some debate over whether that's a good idea, though, because if too much data gets written to /tmp, you can run out of memory.
Solution 3:
As Alan Shutko points out in his comment on the question, RAM isn't actually cheap.
Here are some data points. When I search on Google for 4 GB RAM, 64 GB SSD and 1 TB HDD (mechanical hard drive), here are the costs I see (this is for Aug 25, 2013):
4 GB RAM = $32 - $36 => RAM = ~$8 per GB
64 GB SSD = $69 - $76 => SSD = ~$1 per GB
1 TB HDD = $80 => HDD = $0.08 per GB
Whoa! HDDs are 100x cheaper than RAM! And SSDs are 8x cheaper than RAM.
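Those ratios follow directly from the quoted prices; a quick sanity check using the rounded August 2013 figures above:

```python
# Approximate per-GB prices as stated above (Aug 2013):
ram_per_gb = 8.00   # ~$8/GB   ($32 for 4 GB)
ssd_per_gb = 1.00   # ~$1/GB   ($69-$76 for 64 GB)
hdd_per_gb = 0.08   # $0.08/GB ($80 for 1 TB)

print(round(ram_per_gb / hdd_per_gb))  # 100 -> HDD is ~100x cheaper than RAM
print(round(ram_per_gb / ssd_per_gb))  # 8   -> SSD is ~8x cheaper than RAM
```

So even before considering volatility, filling a machine with enough RAM to replace its disks costs two orders of magnitude more per gigabyte than spinning storage.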
(Plus, as pointed out in other answers, RAM is inherently volatile, and so you need some other form of persistent storage.)