write-through RAM disk, or massive caching of file system?

Consider mounting an ext4 filesystem with fast-and-loose mount options:

noatime,data=writeback,nobh,barrier=0,commit=300

These postpone writing data from the page cache back to the physical disk: noatime skips access-time updates, data=writeback relaxes journaling ordering so only metadata is journaled, nobh avoids buffer-head attachment (only meaningful with data=writeback), barrier=0 disables write barriers, and commit=300 stretches the journal commit interval from the default 5 seconds to 300. The trade-off is that a crash can cost you up to several minutes of recent writes.
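As a sketch, the corresponding /etc/fstab entry might look like this (the device and mount point are placeholders; substitute your own):

```
# Hypothetical /etc/fstab entry -- replace /dev/sdb1 and /data as appropriate
/dev/sdb1  /data  ext4  noatime,data=writeback,nobh,barrier=0,commit=300  0  2
```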

Other than that, you could use aufs to union-mount a tmpfs filesystem on top of your regular filesystem, do all the writing there, and then merge the tmpfs branch back down to the real filesystem afterwards.
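A rough sketch of that procedure (requires root and an aufs-enabled kernel; all paths here are placeholders):

```shell
# Mount a tmpfs as the writable branch, then union it over /data (read-only branch).
mount -t tmpfs -o size=2G tmpfs /mnt/rw
mount -t aufs -o br=/mnt/rw=rw:/data=ro none /mnt/union

# ... do all the heavy writing under /mnt/union ...

# Afterwards, tear down the union and merge the tmpfs contents back
# to the real filesystem.
umount /mnt/union
rsync -a /mnt/rw/ /data/
umount /mnt/rw
```

Note that if files were deleted through the union, aufs records that with whiteout files (.wh.*) in the writable branch, which a plain rsync will copy verbatim; they need special handling during the merge.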


Are you seeing high iowait, indicating that read and write requests are not being satisfied from the existing buffers? As others have noted, Linux is very good about giving spare RAM to buffers, so you should check this first.
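You can watch the wa column of vmstat for this, or compute the cumulative iowait share since boot from /proc/stat; a quick sketch of the latter:

```shell
# The first "cpu" line of /proc/stat holds cumulative CPU ticks:
#   cpu  user nice system idle iowait irq softirq ...
# Field 6 after the "cpu" label is iowait; print it as a share of all ticks.
awk '/^cpu /{total=0; for(i=2;i<=NF;i++) total+=$i;
     printf "iowait: %.1f%%\n", ($6/total)*100}' /proc/stat
```

A persistently high percentage here means processes are stalling on disk; a low one means the cache is already absorbing most requests.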

If you're not seeing IO waits, then it's possible that your performance problems (do you even have problems? Your question doesn't say) are due to kernel context switches from lots of small program-initiated IO operations. In that case you can gain a significant performance boost by rewriting your application to use memory-mapped files, but that's more of a question for Stack Overflow.


Linux by default uses any spare RAM as a file cache, so no configuration is necessary for that.
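You can see this in /proc/meminfo (or in the buff/cache column of free): the Buffers and Cached lines are RAM the kernel is currently lending to the file cache and will reclaim under memory pressure. A quick look:

```shell
# Show total/free RAM alongside what the kernel is using for file caching.
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```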

You may want to consider using ext4 as the filesystem. It uses quite a number of techniques to speed up disk access, including delayed allocation:

This has the effect of batching together allocations into larger runs. Such delayed processing reduces CPU usage, and tends to reduce disk fragmentation, especially for files which grow slowly. It can also help in keeping allocations contiguous when there are several files growing at the same time.

Despite the delayed writes, data loss is pretty rare, because journaling keeps the filesystem metadata consistent; a crash can still lose recently written data that had not yet been flushed.

Ext4 is now the default filesystem in recent Linux releases, though you will probably want to make sure the kernel you use is at least 2.6.30.
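To check what you are actually running, a quick sketch (the kernel's own mount table is more reliable than stat -f, which reports ext filesystems generically on older coreutils):

```shell
# Kernel version -- compare against 2.6.30.
uname -r

# Filesystem type of the root mount, straight from the kernel's mount table.
awk '$2 == "/" {print $3}' /proc/mounts
```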