Can we run Linux in something faster than RAM?
Solution 1:
Linux, or any other OS, does not know how the RAM works. As long as the memory controller is properly configured (e.g. refresh rates set for anything that is not SRAM), the OS does not care whether it runs on plain dynamic memory (plain DRAM), fast page mode RAM (FPM RAM, from the C64-ish times), extended data out RAM (EDO), synchronous DRAM (SDRAM), any of the double data rate SDRAMs (DDR 1/2/3), whatever.
All of those support reading and writing from random places. All will work.
Now cache is a bit different. Its contents can change without you explicitly writing to it (lines get evicted and refilled), and that will get in the way. Still, it is somewhat usable. I know that coreboot uses the cache as a sort of memory during boot, before the memory controller is properly configured. (For the details, check out the videos of the coreboot talks from FOSDEM 2011.)
So in theory yes, you could use it.
BUT: for practical tasks, a system with 1 GB of 'regular', 'medium speed' memory will perform a lot better than one with only a few MB of super fast memory. Which means you have two choices:
- Build things the normal 'cheap' way. If you need more speed add a few dozen extra computers (all with 'slow' memory)
- Or build a single computer at a dozen times the price and with significantly less than a dozen times the performance.
Except in very rare cases, the latter is not sensible; a rough back-of-the-envelope comparison follows below.
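As a tiny sketch of that trade-off, the C program below computes the classic average-access-time formula t_avg = hit_ratio * t_hit + (1 - hit_ratio) * t_miss. All the latencies and the hit ratio are invented round figures chosen purely for illustration, not measurements of any real system:

```c
/* Back-of-the-envelope sketch: average access time as
 *   t_avg = hit_ratio * t_hit + (1 - hit_ratio) * t_miss
 * All numbers below are invented round figures for illustration only. */
#include <stdio.h>

int main(void)
{
    /* System A: a few MB of super fast memory, everything else spills
     * over to swap on an SSD. Even a 90% hit rate doesn't help much.  */
    double t_fast = 1.0;        /* ns                    */
    double t_swap = 100000.0;   /* ns (~100 us, SSD-ish) */
    double hit    = 0.90;
    double avg_a  = hit * t_fast + (1.0 - hit) * t_swap;

    /* System B: 1 GB of ordinary DRAM, the whole working set fits. */
    double t_dram = 60.0;       /* ns */
    double avg_b  = t_dram;

    printf("few MB of fast memory + swap: %8.1f ns per access\n", avg_a);
    printf("1 GB of ordinary DRAM:        %8.1f ns per access\n", avg_b);
    return 0;
}
```

As soon as the working set no longer fits in the small fast memory, the slow fallback dominates the average, which is why capacity usually beats raw speed.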
Solution 2:
Yes, you can, and this is in fact how it is already done, automatically. The most frequently used parts of RAM are copied into the cache. If your total RAM usage is smaller than your cache size (as you suppose), the existing caching mechanism will have copied everything in RAM into the cache.
The only time when the cache would then be copied back to normal RAM is when the PC goes to S3 sleep mode. This is necessary because the caches are powered down in S3 mode.
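You can see this effect from user space with a small, unscientific microbenchmark. The buffer sizes and the pseudo-random walk below are arbitrary choices, but on a typical machine the buffer that fits in cache is served far faster than the one that does not, without the program doing anything special:

```c
/* Unscientific sketch: compare average access time for a buffer that fits
 * in cache against one that does not. Sizes and iteration counts are
 * arbitrary; results vary by CPU. Requires POSIX clock_gettime().        */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double walk(volatile char *buf, size_t size, size_t iters)
{
    struct timespec t0, t1;
    size_t idx = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++) {
        idx = (idx * 1103515245 + 12345) % size;   /* pseudo-random hop */
        buf[idx]++;                                /* touch one byte    */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    return ((t1.tv_sec - t0.tv_sec) * 1e9 +
            (t1.tv_nsec - t0.tv_nsec)) / iters;    /* ns per access */
}

int main(void)
{
    size_t small = 256 * 1024;        /* should fit in L2 on most CPUs */
    size_t large = 256 * 1024 * 1024; /* far bigger than any cache     */
    size_t iters = 50 * 1000 * 1000;
    char *a = calloc(small, 1);
    char *b = calloc(large, 1);

    if (!a || !b)
        return 1;
    printf("small buffer: %.1f ns/access\n", walk(a, small, iters));
    printf("large buffer: %.1f ns/access\n", walk(b, large, iters));
    free(a);
    free(b);
    return 0;
}
```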
Solution 3:
Many CPUs allow the cache to be used as RAM. For example, most newer x86 CPUs can configure certain regions as write-back with no fill on reads via the MTRRs. This can be used to designate a region of the address space as, effectively, cache-as-RAM.
Whether this would be beneficial is another question: it would lock the kernel into this fast memory, but at the same time it would reduce the effective size of the cache. There might also be side effects (such as having to disable caching for the rest of the system) that would make this far slower overall.
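If you just want to see how the address space is currently typed on a Linux box, you can read /proc/mtrr (present on x86 when the kernel is built with MTRR support). This is inspection only; actually carving out a no-fill cache-as-RAM region is firmware-level work and is not something this sketch attempts:

```c
/* Minimal sketch: dump the current MTRR configuration on x86 Linux.
 * /proc/mtrr exists only when the kernel has MTRR support enabled.   */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/mtrr", "r");
    char line[256];

    if (!f) {
        perror("fopen /proc/mtrr");
        return 1;
    }
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);   /* e.g. "reg00: base=0x0 ... write-back" */
    fclose(f);
    return 0;
}
```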
Solution 4:
On x86 there's a thing called CAR (Cache-as-RAM), which allows you to write "bare-metal" code such as bootloaders or BIOS routines in a high-level language like C instead of assembly. Many other architectures may have the same feature.
So it's possible for some OS to run entirely in cache. Imagine having a Ryzen™ Threadripper™ 3990X with a total of 292 MB of cache. That's more than enough to run even some modern tiny Linux. I guess you'll need significant changes to the Linux kernel to make it work, but it's definitely possible.
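To get a feel for how much cache a given machine actually has, you can read Linux's sysfs cache topology. A minimal sketch, assuming a Linux system with the standard /sys/devices/system/cpu/cpu0/cache/ layout; note it only lists the caches visible to cpu0, and shared levels (such as L3) are reported once per core that shares them, so naively summing across all cores would overcount:

```c
/* Minimal sketch: list the cache levels, types and sizes Linux reports
 * for cpu0 via sysfs (the standard cacheinfo layout).                  */
#include <stdio.h>
#include <string.h>

static int read_attr(int idx, const char *attr, char *out, size_t len)
{
    char path[128];
    FILE *f;

    snprintf(path, sizeof path,
             "/sys/devices/system/cpu/cpu0/cache/index%d/%s", idx, attr);
    f = fopen(path, "r");
    if (!f)
        return 0;
    if (!fgets(out, (int)len, f))
        out[0] = '\0';
    out[strcspn(out, "\n")] = '\0';   /* strip trailing newline */
    fclose(f);
    return 1;
}

int main(void)
{
    char level[16], type[32], size[32];

    /* index0, index1, ... each describe one cache (L1d, L1i, L2, L3, ...) */
    for (int i = 0; read_attr(i, "level", level, sizeof level); i++) {
        read_attr(i, "type", type, sizeof type);
        read_attr(i, "size", size, sizeof size);
        printf("L%s %-12s %s\n", level, type, size);
    }
    return 0;
}
```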
For more information, read:
- A Framework for Using Processor Cache as RAM (CAR)
- CAR: Using Cache as RAM in LinuxBIOS
- Can a CPU function with nothing more than a power supply and a ROM, using only the internal cache as RAM?
- Cache-as-Ram (no fill mode) Executable Code
- What use is the INVD instruction?
- How Does BIOS initialize DRAM?