How does memory/commit charge work in Windows 10?
This is actually pretty straightforward once you understand that commit charge represents only potential - though "guaranteed available if you want it" - use of virtual memory, while the "private working set" - essentially the RAM used by "committed" memory - is actual use, as is pagefile space. (This is not all of the use of RAM, though, because plenty of other things use RAM as well.)
Let's assume we're talking about 32-bit systems, so the maximum virtual address space available to each process is normally 2 GiB. (There is no substantial difference in any of the following for 64-bit systems, except that the addresses and sizes can be larger - much larger.)
Now suppose a program running in a process uses VirtualAlloc (a Win32 API) to "commit" 2 MiB of virtual memory. As you'd expect, this will show up as an additional 2 MiB of commit charge, and there are 2 MiB fewer bytes of virtual address space available in the process for future allocations.
But it will not actually use any physical memory (RAM) yet!
The VirtualAlloc call will return to the caller the start address of the allocated region; the region will be somewhere in the range 0x10000 through 0x7FFEFFFF, i.e. about 2 GiB. (The first and last 64KiB, or 0x10000 in hex, of v.a.s. in each process are never assigned.)
But again - there is no actual physical use of 2 MiB of storage yet! Not in RAM, not even in the pagefile. (There is a tiny structure called a "Virtual Address Descriptor" that describes the start v.a. and length of the private committed region.)
So there you have it! Commit charge has increased, but physical memory usage has not.
This is easy to demonstrate with the Sysinternals tool Testlimit.
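For a minimal sketch of the same thing in code (the size and the pause are my own choices; build it as an ordinary Win32 C program), committing a region looks like this:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 2 * 1024 * 1024;  /* 2 MiB */

    /* MEM_RESERVE | MEM_COMMIT: commit charge goes up by 2 MiB,
       but no RAM and no pagefile space is used yet */
    void *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p == NULL) {
        /* the commit would have pushed commit charge past the commit limit */
        printf("VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }

    printf("Committed 2 MiB at %p - no physical pages assigned yet\n", p);
    getchar();   /* pause here and look at Task Manager's "Commit" column */

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```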
Sometime later, let's say the program stores something (i.e. does a memory write) in that region (it doesn't matter where). There is not yet any physical memory underneath any of the region, so the access will incur a page fault. In response, the OS's memory manager - specifically the page fault handler routine (the "pager" for short; it's called MiAccessFault) - will:
- allocate a previously-"available" physical page
- set up the page table entry for the virtual page that was accessed to associate the virtual page number with the newly-assigned physical page number
- add the physical page to the process private working set
- and dismiss the page fault, causing the instruction that raised the fault to be retried.
You have now "faulted" one page (4 KiB) into the process. And physical memory usage will increment accordingly, and "available" RAM will decrease. Commit charge does not change.
Sometime later, if that page has not been referenced for a while and demand for RAM is high, this might happen:
1. The OS removes the page from the process working set.
2. Because it was written to since it was brought into the working set, the page is put on the modified page list (otherwise it would go on the standby page list). The page table entry still holds the physical page number of the page of RAM, but now has its "valid" bit clear, so the next time the page is referenced a page fault will occur.
3. When the modified page list reaches a small threshold, a modified page writer thread in the "System" process wakes up and saves the contents of modified pages to the pagefile (assuming you have one), and...
4. ...takes those pages off the modified list and puts them on the standby list. They are now considered part of "available" RAM; but for now they still hold their original contents from when they were in their respective processes. Again, commit charge doesn't change, but RAM usage and the process private working set go down.
5. Pages on the standby list can now be repurposed - that is, used for something else, such as resolving page faults from any process on the system, or for use by SuperFetch. However...
6. If a process that has lost a page to the modified or standby list accesses it again before the physical page has been repurposed (i.e. it still has its original content), the page fault is resolved without reading from disk. The page is simply put back in the process working set and its page table entry is made "valid" again. This is an example of a "soft" or "cheap" page fault. We say that the standby and modified lists form a system-wide cache of pages that are likely to be needed again soon.
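A rough way to provoke the soft-fault path from user mode (my own sketch, not something from the steps above; calling SetProcessWorkingSetSize with two (SIZE_T)-1 arguments asks the OS to trim the calling process's working set, and the example assumes the trimmed pages haven't been repurposed before they're touched again):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const SIZE_T size = 2 * 1024 * 1024;
    volatile char *p = VirtualAlloc(NULL, size,
                                    MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!p)
        return 1;

    for (SIZE_T i = 0; i < size; i += 4096)   /* fault every page into the working set */
        p[i] = 1;

    /* Ask the OS to trim this process's working set. Because the pages were
       written to, they typically land on the modified page list - still in RAM. */
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);

    for (SIZE_T i = 0; i < size; i += 4096)   /* soft faults: resolved from RAM, no disk reads */
        p[i] = 2;

    puts("Pages came back into the working set without touching the pagefile");
    VirtualFree((LPVOID)p, 0, MEM_RELEASE);
    return 0;
}
```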
If you don't have a pagefile, then steps 3 through 5 are changed to:
The pages sit on the modified list, since there's nowhere to write their contents.
Step 6 remains the same, since pages on the modified list can be faulted back into the process that lost them as a "soft" page fault. But if that doesn't happen the pages sit on the modified list until the process deallocates the corresponding virtual memory (maybe because the process ends).
There is other use of virtual address space, and of RAM, besides private committed memory. There is mapped virtual address space, for which the backing store is some specified file rather than the pagefile. The pages of mapped v.a.s. that are paged in are reflected in RAM usage, but mapped memory does not contribute to commit charge because the mapped file provides the backing store: Any part of the mapped region that isn't in RAM is simply kept in the mapped file. Another difference is that most file mappings can be shared between processes; a shared page that's already in memory for one process can be added to another process without going to disk for it again (another soft page fault).
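As a hedged sketch of the mapped case (the file name is just a placeholder, and the example assumes the file exists and is not empty): the mapped pages show up in RAM use as they are touched, but add nothing to commit charge, because the file itself is the backing store.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* "example.dat" is only an illustrative name */
    HANDLE file = CreateFileW(L"example.dat", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    /* the file is the backing store, so this adds nothing to commit charge */
    HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (!mapping)
        return 1;

    const unsigned char *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
    if (view) {
        /* touching the view pages the data in from the file, not the pagefile */
        printf("first byte: %u\n", view[0]);
        UnmapViewOfFile(view);
    }

    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```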
And there is nonpageable v.a.s., for which there is no backing store because it's always resident in RAM. It contributes to the reported RAM usage and to the "commit charge" as well.
This, it seems, might be because of compression. Which transforms the question to: why doesn't the commit limit then go up, or something? I.e. what's the point of compression if it doesn't help with memory usage?
No, it has nothing to do with compression. Memory compression in Windows is done as an intermediate step, on pages that otherwise would be written to the pagefile. In effect it lets the modified page list hold more stuff in less RAM, at some cost in CPU time but with far greater speed than pagefile I/O (even to an SSD). Since the commit limit is calculated from total RAM + total pagefile size - not from RAM usage + pagefile usage - compression doesn't affect the commit limit. For example (hypothetical numbers), 8 GiB of RAM plus a 4 GiB pagefile gives a commit limit of roughly 12 GiB, whether the RAM is 20% or 90% in use. The commit limit doesn't change with how much RAM is in use or what it's in use for.
When commit charge fills up and Windows starts asking me to close things, most of the time physical memory is at around 60%. This seems horribly inefficient.
It isn't that Windows is being inefficient. It's the apps you're running. They're committing a lot more v.a.s. than they're actually using.
The reason for the entire "commit charge" and "commit limit" mechanism is this: When I call VirtualAlloc, I am supposed to check the return value to see if it's non-zero. If it's zero, my alloc attempt failed, most likely because granting it would have pushed commit charge past the commit limit. I'm then supposed to do something reasonable, like trying to commit less or exiting the program cleanly.
If VirtualAlloc returns nonzero, i.e. an address, that tells me the system has made a guarantee - a commitment, if you will - that however many bytes I asked for, starting at that address, will be available if I choose to access them; that there is someplace to put it all, either RAM or the pagefile. In other words, there is no reason to expect any sort of failure when accessing anything within that region. That's good, because it would not be reasonable to expect me to check "did it work?" on every access to the allocated region.
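As a sketch of the "do something reasonable" part (the helper name and its halving fallback policy are my own invention, not anything Windows prescribes):

```c
#include <windows.h>
#include <stdio.h>

/* Try to commit 'want' bytes; rather than failing outright, fall back to
   smaller sizes. Returns the start address, or NULL if even 'minimum'
   cannot be committed. */
static void *commit_some(SIZE_T want, SIZE_T minimum, SIZE_T *got)
{
    for (SIZE_T size = want; size >= minimum; size /= 2) {
        void *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (p) {
            *got = size;   /* the OS has now guaranteed backing for these bytes */
            return p;
        }
    }
    *got = 0;
    return NULL;           /* couldn't commit even the minimum; fail cleanly */
}

int main(void)
{
    SIZE_T got;
    void *buf = commit_some(512u * 1024 * 1024, 16u * 1024 * 1024, &got);
    if (!buf) {
        fputs("Not enough commit available; exiting cleanly\n", stderr);
        return 1;
    }
    printf("Committed %llu MiB\n", (unsigned long long)got >> 20);
    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}
```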
The "cash lending bank" analogy
It's a little like a bank offering credit, but strictly on a cash-on-hand basis. (This is not, of course, how real banks work.)
Suppose the bank starts with a million dollars cash on hand. People go to the bank and ask for lines of credit in varying amounts. Say the bank approves me for a $100,000 line of credit (I create a private committed region); that doesn't mean that any cash has actually left the vault. If I later actually take out a loan for, say, $20,000 (I access a subset of the region), that does remove cash from the bank.
But whether I take out any loans or not, the fact that I've been approved for a maximum of $100K means the bank can subsequently only approve another $900,000 worth of lines of credit, total, for all of its customers. The bank won't approve credit in excess of its cash reserves (i.e. it won't overcommit), since that would mean the bank might have to turn a previously-approved borrower away when they later show up intending to take out their loan. That would be very bad, because the bank has already committed to allowing those loans, and its reputation would plummet.
Yes, this is "inefficient" in terms of the bank's use of that cash. And the greater the disparity between the lines of credit the customers are approved for and the amounts they actually loan, the less efficient it is. But that inefficiency is not the bank's fault; it's the customers' "fault" for asking for such high lines of credit but only taking out small loans.
The bank's business model is that it simply cannot turn down a previously-approved borrower when they show up to get their loan - to do so would be "fatal" to the customer. That's why the bank keeps careful track of how much of the loan fund has been "committed".
I suppose that expanding the pagefile, or adding another one, would be like the bank going out and getting more cash and adding it to the loan fund.
If you want to model mapped and nonpageable memory in this analogy... nonpageable is like a small loan that you are required to take out and keep out when you open your account. (The nonpageable structures that define each new process.) Mapped memory is like bringing your own cash along (the file that's being mapped) and depositing it in the bank, then taking out only parts of it at a time (paging it in). Why not page it all in at once? I don't know, maybe you don't have room in your wallet for all that cash. :) This doesn't affect others' ability to borrow money because the cash you deposited is in your own account, not the general loan fund. This analogy starts breaking down about there, especially when we start thinking about shared memory, so don't push it too far.
Back to the Windows OS: The fact that you have much of your RAM "available" has nothing to do with commit charge and commit limit. If you're near the commit limit that means the OS has already committed - i.e. promised to make available when asked for - that much storage. It doesn't have to be all in use yet for the limit to be enforced.
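If you'd like to see the two numbers side by side on your own machine, here is a small sketch (it assumes linking with psapi.lib) using GetPerformanceInfo, which reports the system-wide commit charge, commit limit, and available RAM in one call:

```c
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi;
    if (!GetPerformanceInfo(&pi, sizeof pi))
        return 1;

    /* these counters are in pages; PageSize converts them to bytes */
    unsigned long long page = pi.PageSize;
    printf("Commit charge : %llu MiB\n", pi.CommitTotal       * page >> 20);
    printf("Commit limit  : %llu MiB\n", pi.CommitLimit       * page >> 20);
    printf("RAM available : %llu MiB\n", pi.PhysicalAvailable * page >> 20);
    return 0;
}
```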
Can I forgo artificially inflating my page file to levels my starved-for-space SSD is ill-equipped to handle, just so I can actually use my physical memory effectively? (Or even if it weren't as full. That is, I'd like to avoid suggestions like "do X/Y/Z to your page file".)
Well, I'm sorry, but if you're running into commit limit, there are just three things you can do:
- Increase your RAM.
- Increase your pagefile size.
- Run less stuff at one time.
Re option 2: You could put a second pagefile on a hard drive. If the apps are not actually using all that committed memory - which apparently they're not, since you're seeing so much free RAM - you won't actually be accessing that pagefile much, so putting it on a hard drive won't hurt performance. If the slowness of a hard drive would still bother you, another option is to get a small and therefore cheap second SSD and put your second pagefile on that. The one "showstopper" would be a laptop with no way to add a second "non-removable" drive. (Windows will not let you put pagefiles on removable drives, like anything connected with USB.)
Here is another answer I wrote that explains things from a different direction.
P.S.: You asked about Windows 10, but I should tell you that it works the same way in every version of the NT family, back to NT 3.1, and in prerelease versions too. What has likely changed is Windows' default setting for pagefile size, from 1.5x or 1x RAM size to much smaller. I believe this was a mistake.