Why does Windows always use as much Virtual Memory as there is RAM installed?
And why does it want a maximum of twice that amount? My system has 32GB of RAM and Windows, by default, sets a minimum of 16MB of virtual memory, allocates 32GB(!) and recommends 50GB(!!). It's even worse on a 64GB RAM system, where it recommended over 100GB to be allocated, though again it "only" used the same amount that was available as RAM, 64GB in this case.
As far as I understand the concept, the pagefile is something Windows only expands when needed, starting off with the minimum amount, but it never does that; it always goes to the absolute maximum. This is extremely annoying, because disabling the pagefile, or not setting it to the same amount as the RAM installed, causes issues in some programs that use lots of memory, like 7zip: they claim that there is not enough memory to allocate even though there is plenty of free, usable memory.
This behavior potentially decreases the lifespan of my drives (SSDs) tremendously. Why does Windows do this, and how can I prevent it? Alternatively, how can I disable the pagefile completely without getting weird behavior in some programs?
Solution 1:
First, it is a giant mistake (not yours) that Windows' dialog, the one where you set the pagefile size, equates the pagefile with "virtual memory." The pagefile is merely the backing store for one category of virtual address space, that used by private committed memory. There is virtual address space that is backed by other files (mapped files), and there is v.a.s. that is always nonpageable and so stays in RAM at all times. But it's all "virtual memory" in that, at least, translation from virtual to RAM addresses is always at work.
Your observation is correct: Windows' allocation of pagefile size uses a simple calculation of default = RAM size and maximum = twice that. (It used to be 1.5x and 3x.) They have to set it to something, and these factors provide a result that's almost always enough. It also guarantees enough pagefile space to catch a memory dump if the system crashes (assuming you've enabled kernel or full dumps).
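The sizing rule described above is simple enough to sketch. This is just an illustration of the arithmetic, not Windows' actual internal code:

```python
GB = 1024 ** 3

def default_pagefile_sizes(ram_bytes):
    """Sketch of the heuristic: initial size = RAM size, maximum = twice that.
    (Older Windows versions used 1.5x and 3x instead.)"""
    return ram_bytes, 2 * ram_bytes

initial, maximum = default_pagefile_sizes(32 * GB)
print(initial // GB, maximum // GB)  # 32 64
```

On the asker's 32 GB machine, that heuristic lands exactly on the observed 32 GB allocation, with headroom for growth.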
As far as I understand the concept, the pagefile is something Windows only expands when needed, starting off with the minimum amount, but it never does that; it always goes to the absolute maximum.
Ah... it starts out with the "initial size". This is NOT the "minimum allowed". That is why you are seeing it at your RAM size: Windows uses that for the initial size.
But... are you saying you are seeing the actual pagefile size go to the maximum setting? e.g. if it is set to 16 GB initial, 32 GB max, you're seeing the actual size ("currently allocated") at 32 GB? It should always revert to the initial size when you reboot, btw.
Do you see "system is running low on virtual memory" pop-ups? 'cause you should, when the OS expands the pagefile beyond the current size.
The OS is not going to enlarge the pagefile unless something has actually tried to allocate so much private committed memory that it needs the enlarged pagefile space to store the stuff. But, maybe something has. Take a look at Task Manager's Processes tab. The "Commit size" column shows this for each process. Click on the column heading to see who the hog is. :)
This is extremely annoying, because disabling the pagefile, or not setting it to the same amount as the RAM installed, causes issues in some programs that use lots of memory, like 7zip: they claim that there is not enough memory to allocate even though there is plenty of free, usable memory.
This has to do not with available RAM but with something called "commit charge" and "commit limit". The "commit limit" is (RAM - nonpageable virtual memory) + current pagefile size. (Not free RAM, just RAM.) So a system with, say, 8 GB RAM and a 16 GB current pagefile would have a commit limit of about 24 GB ("about" because the RAM that holds nonpageable contents does not count toward the commit limit).
The "commit charge" is how much private address space currently exists in the system. This has to be less than the commit limit, otherwise the system cannot guarantee that the stuff has a place to be.
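The arithmetic in the two paragraphs above can be sketched like this (the numbers, including the nonpageable figure, are illustrative, not real measurements):

```python
GB = 1024 ** 3

def commit_limit(ram, pagefile, nonpageable):
    # Commit limit = (RAM available for pageable content) + current pagefile size.
    # Free vs. in-use RAM is irrelevant; only the nonpageable portion is deducted.
    return (ram - nonpageable) + pagefile

# The "about 24 GB" example from above: 8 GB RAM, 16 GB pagefile,
# with a hypothetical 512 MB of nonpageable content.
limit = commit_limit(ram=8 * GB, pagefile=16 * GB, nonpageable=512 * 1024 ** 2)
print(limit / GB)  # 23.5
```

The system can only keep promising backing store while the total commit charge stays under this limit.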
On task manager's Performance tab you can see these two numbers with the legend "Commit (GB)". e.g. I'm looking at a machine that says "Commit (GB) 1/15". That's 1 GB current commit charge out of 15 GB limit.
If a program like 7zip tries to do e.g. a VirtualAlloc of size > (commitLimit - commitCharge), i.e. greater than the "remaining" commit limit, then if the OS can't expand the pagefile to make the commit limit big enough, the allocation request fails. That's what you're seeing happening. (Windows actually has no error message for "low on physical memory" for user-mode allocations; only for virtual.)
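A toy model of that check, under the simplifying assumption that the nonpageable deduction is negligible (names and numbers here are hypothetical, not Windows internals):

```python
GB = 1024 ** 3

def try_commit(request, commit_charge, ram, pagefile_current, pagefile_max):
    """Model: an allocation of private committed memory succeeds only if it
    fits under the commit limit, possibly after growing the pagefile up to
    its configured maximum. Free RAM never enters into it."""
    limit = ram + pagefile_current              # current commit limit
    if commit_charge + request <= limit:
        return True, pagefile_current           # fits under the current limit
    needed_pagefile = commit_charge + request - ram
    if needed_pagefile <= pagefile_max:
        return True, needed_pagefile            # pagefile expands; allocation succeeds
    return False, pagefile_current              # "not enough memory", even with free RAM

# A 20 GB request on a 16 GB RAM box with a 4 GB pagefile capped at 8 GB:
ok, pf = try_commit(request=20 * GB, commit_charge=10 * GB,
                    ram=16 * GB, pagefile_current=4 * GB, pagefile_max=8 * GB)
print(ok)  # False: 30 GB of commit needed, but the limit can reach at most 24 GB
```

Note that the failure above happens no matter how much of that 16 GB of RAM is free, which is exactly the confusing situation the asker describes.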
It has nothing to do with free RAM, as all RAM (minus the tiny bit that's nonpageable) counts toward the commit limit whether it is currently free or not.
It's confusing because when you look at the system after one of these allocation failures, there is nothing apparently wrong - you look at the system, your commit charge is well below the limit, you may even have a lot of free RAM. You'd have to know how much private committed memory the program had been trying to allocate in order to see what the problem was. And most programs won't tell you.
Sounds to me like 7zip is being far too aggressive at trying to allocate v.a.s., maybe it is scaling its requests based on your RAM size? Are you sure there is no smaller pagefile setting where 7zip would be happy? Are you using the 32-bit or 64-bit version of 7-zip? Using the 32-bit version would fix this, since it can't possibly use more than 2 or maybe 3 GB of virtual address space. Of course it might not be as fast on huge datasets.
This behavior potentially decreases the lifespan of my drives (SSDs) tremendously.
Well, no, not really. Simply putting a pagefile out there of whatever size does not mean the system will actually be writing that much to the pagefile. (Unless you have the option set to "clear pagefile on shutdown", and even then I don't think it writes the whole thing; the memory manager (Mm) knows what blocks are in use and should only write those... I've never thought of that before; I'll have to check.)
If you want to look at how much stuff is really in your pagefile, use the PerfMon utility. There is a counter group for Page file, and you of course want the "% usage" counter. Interpret this percentage per the file's actual size (as shown in Explorer).
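Since the counter is a percentage of the file's on-disk size, converting it to bytes is just a multiplication (numbers here are made up for illustration):

```python
GB = 1024 ** 3

def pagefile_bytes_in_use(percent_usage, pagefile_size_bytes):
    """Turn PerfMon's Paging File "% Usage" counter into bytes,
    given the file's actual size as shown in Explorer."""
    return percent_usage / 100 * pagefile_size_bytes

# e.g. a 3.5% reading against a 32 GB pagefile:
used = pagefile_bytes_in_use(3.5, 32 * GB)
print(round(used / GB, 2))  # 1.12
```

In practice that number is often a small fraction of the allocated size, which is the answer's point: a big pagefile does not imply big write traffic to the SSD.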
It DOES use a lot of space and since space on an SSD is pretty dear, this is a concern for most of us. One thing you might try is putting a reasonable-sized pagefile, say 4 or 8 GB, on your SSD, then attach a spinning rust drive and put a large pagefile on that. Or if you want an SSD and nothing else for your pagefile, buy a cheap small one just for the second pagefile.