Why does partially full RAM cause lag?
Solution 1:
There is much involved here but I will try to explain it as simply as I can and in a way applicable to just about any OS.
There are 2 basic principles here:
The sum total of everything that needs to be in RAM and those things that would benefit from being in RAM is almost always greater than the size of RAM. Things that would benefit from being in RAM include process working sets and the standby list. The latter contains data and code that was once in active use but has since lapsed into inactivity. Much of this will be used again, some of it quite soon, so it is beneficial to keep this in RAM. This memory acts as a kind of cache but is not really essential so is in the category of available memory. Like free memory it can be quickly given to any program that needs it. In the interests of performance standby memory should be large.
The frequency of use of memory blocks is far from random but can be predicted with considerable accuracy. Memory is divided into blocks, often 4K bytes. Some blocks are accessed many times per second while others have not been accessed for many minutes, hours, days, or even weeks if the system has been up long enough. There is a wide range of usage between these 2 extremes. The memory manager knows which blocks have been accessed recently and those that have not. It is a reasonable assumption that a memory block that has been accessed recently will be needed again soon. Memory that has not been accessed recently probably won't be needed anytime soon. Long experience has proven this to be a valid principle.
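The recency principle described above is essentially least-recently-used (LRU) replacement. A minimal Python sketch of the idea (a toy model for illustration, not how any real memory manager is implemented):

```python
from collections import OrderedDict

class LRUBlockCache:
    """Toy model of keeping recently accessed blocks in RAM."""
    def __init__(self, capacity):
        self.capacity = capacity      # number of blocks that fit in "RAM"
        self.blocks = OrderedDict()   # block id -> data, oldest first

    def access(self, block_id):
        if block_id in self.blocks:
            # Recently used: move to the most-recently-used end.
            self.blocks.move_to_end(block_id)
            return True               # "RAM hit"
        if len(self.blocks) >= self.capacity:
            # Evict the least recently used block back to "disk".
            self.blocks.popitem(last=False)
        self.blocks[block_id] = object()
        return False                  # miss: block fetched from disk

cache = LRUBlockCache(capacity=3)
for b in [1, 2, 3, 1, 4]:
    cache.access(b)
# Block 2 was least recently used, so it was evicted when 4 was loaded.
print(sorted(cache.blocks))           # [1, 3, 4]
```

Accessing block 1 again "refreshes" it, so the older block 2 is the one pushed out when RAM runs short, matching the prediction that recently used memory will be needed again soon.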
The memory manager takes advantage of the second principle to largely mitigate the undesirable consequences of the first. To do this it performs a balancing act, keeping recently accessed data in RAM while leaving rarely used data in the original files or the pagefile.
When RAM is plentiful this balancing act is easy. Much of the not so recently used data can be kept in RAM. This is a good situation.
Things get more complicated when the workload increases. The sum total of data and code in use is larger but the size of RAM remains the same. This means that a smaller subset of this can be kept in RAM. Some of the less recently used data can no longer be in RAM but must be left on disk. The memory manager tries very hard to maintain a good balance between memory in active use and available memory. But as the workload increases the memory manager will be forced to give more available memory to running processes. This is not a good situation but the memory manager has no choice.
The problem is that moving data to and from RAM as programs run takes time. When RAM is plentiful this won't happen very often and won't even be noticed. But when RAM usage reaches high levels it will happen much more often. The situation can become so bad that more time is spent moving data to and from RAM than is spent actually using it. This is thrashing, something the memory manager tries very hard to avoid, but under a high workload it often cannot be.
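Some back-of-the-envelope arithmetic shows why even a small fault rate hurts so much. The numbers below are illustrative orders of magnitude (RAM access ~100 ns, a fault serviced from disk ~10 ms), not measurements of any particular system:

```python
def effective_access_ns(fault_rate, ram_ns=100, disk_ns=10_000_000):
    """Average memory access time when a fraction of accesses page-fault.

    ram_ns and disk_ns are assumed, illustrative costs.
    """
    return (1 - fault_rate) * ram_ns + fault_rate * disk_ns

# With plentiful RAM, faults are rare and barely noticed.
print(effective_access_ns(0.000001))   # ~110 ns
# Under heavy load, even 1 fault per 1000 accesses dominates:
print(effective_access_ns(0.001))      # ~10100 ns, roughly 90x slower
```

Because a disk access is around 100,000 times slower than a RAM access, the fault rate only has to creep up slightly before the system spends most of its time waiting on the disk, which is exactly what thrashing feels like.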
The memory manager is on your side, always trying its best to maintain optimum performance even under adverse conditions. But when the workload is great and available memory runs short it must do bad things in order to keep functioning. That is in fact the most important thing. The priority is first to keep things running, then to make things as fast as possible.
Solution 2:
All modern operating systems use otherwise unused memory for caching data so that it can be accessed from fast RAM instead of slower storage. They will generally report this as free memory, since applications can clear the cache and use it if they need to, but it's still actually being used. The less of it there is, the less data can be cached, and the slower the computer will be.
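The caching effect can be seen by timing two reads of the same file. This is a sketch that assumes a POSIX system; the `posix_fadvise` call, which hints the kernel to drop the file from the cache so the first read comes from storage, is Linux-specific and is skipped where unavailable:

```python
import os
import time
import tempfile

# Create a test file; writing it also places its pages in the OS cache.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(20 * 1024 * 1024))  # 20 MB of data
    path = f.name

# Ask the kernel to drop this file from its cache (Linux-specific hint).
if hasattr(os, "posix_fadvise"):
    with open(path, "rb") as f:
        os.posix_fadvise(f.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)

def timed_read(p):
    start = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, len(data)

first, size = timed_read(path)    # likely read from storage
second, _ = timed_read(path)      # typically served from the page cache
print(f"first read: {first:.4f}s  second read: {second:.4f}s")
os.remove(path)
```

On most systems the second read is noticeably faster; the exact numbers depend on the storage device and on how much otherwise unused RAM the cache was able to claim.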