Why do we need to set swap space twice as big as our physical memory?

The short answer is "You don't have to".

Depending on the kernel/system type, it may make sense to size swap space like that. For example, FreeBSD's tuning(7) manpage gives the following rationale for a swap size of at least 2x the physical memory size:

You should typically size your swap space to approximately 2x main memory for systems with less than 2GB of RAM, or approximately 1x main memory if you have more. If you do not have a lot of RAM, though, you will generally want a lot more swap. It is not recommended that you configure any less than 256M of swap on a system and you should keep in mind future memory expansion when sizing the swap partition.

The kernel's VM paging algorithms are tuned to perform best when there is at least 2x swap versus main memory. Configuring too little swap can lead to inefficiencies in the VM page scanning code as well as create issues later on if you add more memory to your machine.

Finally, on larger systems with multiple SCSI disks (or multiple IDE disks operating on different controllers), we strongly recommend that you configure swap on each drive. The swap partitions on the drives should be approximately the same size. The kernel can handle arbitrary sizes but internal data structures scale to 4 times the largest swap partition. Keeping the swap partitions near the same size will allow the kernel to optimally stripe swap space across the N disks. Do not worry about overdoing it a little, swap space is the saving grace of UNIX and even if you do not normally use much swap, it can give you more time to recover from a runaway program before being forced to reboot.
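The rule of thumb quoted above can be written down directly. Here is a minimal sketch in Python; the thresholds (2 GB cutoff, 256 MB floor) come from the quoted text, while the function name and the choice to work in megabytes are just illustration:

```python
def recommended_swap_mb(ram_mb: int) -> int:
    """Swap size in MB following the tuning(7) rule of thumb quoted above."""
    if ram_mb < 2048:          # less than 2 GB of RAM: roughly 2x RAM
        swap = 2 * ram_mb
    else:                      # 2 GB or more: roughly 1x RAM
        swap = ram_mb
    return max(swap, 256)      # the text advises never configuring < 256 MB

print(recommended_swap_mb(1024))   # 1 GB RAM  -> 2048 MB swap
print(recommended_swap_mb(8192))   # 8 GB RAM  -> 8192 MB swap
```

Note that this is only the FreeBSD guidance from the manpage, not a universal rule; as the rest of this answer explains, other systems and workloads call for different sizing.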

Other factors may be important when you decide how much swap space to allocate, where to allocate it, and so on. For example, if you are installing a large server with 128 GB of physical memory, it's probably a good idea to avoid pre-allocating 256 GB of disk space for swap that will never be used.

On the other hand, having some swap space often makes it possible to grab kernel dumps (e.g. in Open-, Net- and FreeBSD). So it's a good idea to have at least enough swap space to grab a full kernel dump on panic.

There is no absolute rule that fits all cases. You have to read about your specific system's behavior, learn how it works, think about the intended use of the system and decide the best size of swap space that fits your needs.


You don't need to at all. Old versions of Windows treated each page of allocated memory as essentially an mmap of the swap file, so you needed at least as much swap as your total physical RAM for that memory to be usable. This is no longer the case today, and was never the case on Linux, but the rumor persists.

However, there is a case where having at least as much swap as RAM is desirable: hibernation. Since Linux uses the swap space for hibernation (a.k.a. suspend-to-disk), you need enough swap to hold all the data in RAM, plus whatever was already swapped out, minus reclaimable caches. This only matters for machines that need to hibernate, such as laptops, of course.
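As a rough sanity check for hibernation sizing, you can compare the memory that would have to be written out against the free swap available. A minimal sketch, assuming Linux-style /proc/meminfo figures in kB; the function name and the simplification of treating all page cache as droppable are illustrative, not an exact model of what the kernel does:

```python
def can_hibernate(mem_total_kb: int, cached_kb: int, swap_free_kb: int) -> bool:
    """Rough check: do the RAM contents (minus droppable page cache)
    fit into the remaining free swap? Figures in kB, as /proc/meminfo
    reports them."""
    # Page cache can largely be dropped before suspend-to-disk, so it
    # doesn't need to be written to swap.
    needed_kb = mem_total_kb - cached_kb
    return swap_free_kb >= needed_kb

# Example: 8 GB RAM, 2 GB page cache, 8 GB of swap entirely free
print(can_hibernate(8 * 1024**2, 2 * 1024**2, 8 * 1024**2))  # True
```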

Finally, having too much swap can be a bad thing, despite what others may say. Think about it: if you have 4 GB of RAM and your workload spills into an additional 8 GB of swap, will the system still be usable with all the swapping to and from disk? It's often better to have the memory-hogging process killed outright when you run out of memory than to have the entire system slow to an unusable crawl while it spends all its time shuttling data in and out of swap.


A long time ago, there was a common Unix variant (I think it was a BSD, but I can't find the reference right now) that reserved a page of swap for every page of virtual memory. So if you had as much swap as RAM, your virtual memory would still be no larger than your RAM. The usual recommendation then was to have twice as much swap as RAM, which made virtual memory twice as big as RAM.

Modern unices don't behave that way, so the reason for the rule is obsolete (I think it was already obsolete in 1992, so it was never relevant for Linux). But oddly enough the rule survived. If you follow it now, you get virtual memory that's three times your amount of RAM, whereas the original intent was to get twice as much.

Just because the historical reason behind the rule is wrong doesn't mean it's stupid. Disk space has become cheaper, so it can make sense to allocate more swap. How much swap you should have depends a lot on how much RAM you have and how you use it. You can run a system without swap, but then you don't get a chance to choose what programs to kill if your RAM does fill up, and the system may be slower (sometimes it's better to use RAM for cache and swap some program memory out). Allocating too much swap costs a tiny amount of RAM (for kernel data structures) and of course disk space (but nowadays that's usually dirt cheap except on SSD). Having enough swap to fit all your virtual memory is necessary if you want to hibernate.