Is killing a process still considered bad for memory management?

It's been a long time since I learned this stuff, but here goes.

When an operating system launches a process, it assigns it pages of virtual memory. The operating system is responsible for maintaining the map from each process's virtual address space to real memory or to the swap space on disk. When a process gets killed, the OS doesn't just stop giving it CPU cycles. It performs a few cleanup steps, one of which is to mark all of the process's memory pages as free, so they can be reused by other applications. The OS will probably also clean up any resource handles the process held, automatically closing files, network connections, process-to-process pipes, and so on. This cleanup is entirely under the control of the OS, and these steps will be taken no matter how the process died.
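You can see that cleanup in action with a quick sketch (Python stdlib here purely for illustration; the allocation size and sleep are arbitrary). The child is killed with no chance to run any cleanup code of its own, yet the OS reclaims its memory and resources the moment it dies:

```python
import subprocess
import sys
import time

# Spawn a child that allocates ~50 MB of heap and then just sleeps.
child = subprocess.Popen([
    sys.executable, "-c",
    "data = bytearray(50 * 1024 * 1024)\n"
    "import time; time.sleep(60)"
])
time.sleep(1)    # give the child a moment to finish allocating
child.kill()     # SIGKILL on POSIX: the child cannot intercept this
child.wait()     # reap the exit status; the OS has already freed its pages
print("child exited with:", child.returncode)
```

The nonzero exit status shows the child never reached any cleanup code of its own; the OS did the reclamation unilaterally.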

Bear in mind that all of this applies to operating-system processes. If you have some manner of virtual machine, and it's running multiple virtual processes at once, then the VM is responsible for deciding how to allocate and deallocate memory to them. To the OS, however, the VM still looks like one process. So in this one case, if you have a VM that runs multiple processes, and you kill one of them within the VM, then you probably won't immediately get memory back in the host OS; you'll get it back in the VM. If you kill the VM within the operating system, however, then the OS is going to kill the VM (which indirectly kills the VM's processes) and reclaim all of its memory (which will not need to go through a garbage collector, free(), delete, or anything else).

Highly speculative:

If .NET runs as a virtual machine with more than one .NET app on the same VM, then .NET could hold on to memory that hasn't been garbage-collected yet, until it runs the GC, and Windows would think that .NET is using more memory than it really is. (And if MS were really slick, Windows could tell .NET to run a GC in tight memory situations, but there's almost no point to that, because that's what disk swap space is for.)

If .NET did work that way, then the OS would still treat it as one process for OS purposes, one that takes responsibility for deciding what to keep and what to throw away; it's not normally Windows's job to tell a process that it needs to start deallocating memory. It's conceivable that MS would build a special API just for .NET so that .NET processes look like Windows processes, even though they aren't, which is why people might think that process memory was not getting deallocated. It is, really; you're just looking at the wrong process.

I don't know enough about .NET to say that it actually does work that way; the Java VM certainly doesn't.

End of speculation.

EDIT: Insofar as killing a process being bad for memory management is concerned, that would require multiple processes to allocate out of the same pool (i.e. they're more like threads than real processes) and for the memory not to be freed after a process is killed. That would almost require a cooperative multitasking system, because virtual memory and pre-emptive multitasking were, to my knowledge, usually implemented together (virtual memory making it possible to isolate processes from each other and keep them from stomping on each other's memory). Having virtual memory makes it trivial to clean up after a process at the OS level: you just move all the pages from the process's pool to the free pool.
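That last step can be sketched with a toy model (purely illustrative, nothing like real kernel data structures; the 16-frame pool is made up):

```python
# Toy model: physical page frames are either free or mapped into exactly
# one process's page table. Terminating a process is just moving its
# frames back to the free pool.
free_pages = set(range(16))          # 16 physical frames, all free

def allocate(page_table, n):
    """Map n free physical frames into a process's page table."""
    for virtual in range(len(page_table), len(page_table) + n):
        page_table[virtual] = free_pages.pop()

def terminate(page_table):
    """On process death, every mapped frame returns to the free pool."""
    free_pages.update(page_table.values())
    page_table.clear()

proc = {}                 # one process's virtual-to-physical page table
allocate(proc, 4)
print(len(free_pages))    # 12 frames left while the process is alive
terminate(proc)
print(len(free_pages))    # all 16 frames free again after the kill
```

No per-allocation bookkeeping is needed at termination; the page table itself is the complete record of what the process owned.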


No problem in my experience, kill away.

As an example, if you have 4 GB of RAM, 3 GB of which is in use by a game, and you kill the game process, you can restart the game without issues and it will get that 3 GB back again.


The operating systems listed in the question tags (Windows and OS X) implement virtual memory, where each process is given its own address space, which is then mapped to physical memory by the OS. These mapping tables are used in cleaning up memory allocations when a process is terminated, so memory is completely freed. Physical pages may be shared among multiple processes, in which case they are freed when there are no more users.

Typically, other resources such as file handles are given to processes in the form of capabilities, where the process receives a handle on the resource and manipulates it through well-defined access functions. The OS keeps a table mapping from the handle value to the in-kernel object providing the function; again, this table can be used for cleanup when a process is terminated.
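A toy model of such a handle table (an illustrative sketch, not actual Windows or XNU internals): the process only ever sees small integer handles, and the kernel's side of the table is all it needs to release everything at termination.

```python
import io

class HandleTable:
    """Toy kernel-side table mapping handle values to real objects."""

    def __init__(self):
        self._next = 3            # pretend 0-2 are stdin/stdout/stderr
        self._objects = {}

    def open(self, obj):
        """Register an object and return the opaque handle the process sees."""
        handle = self._next
        self._next += 1
        self._objects[handle] = obj
        return handle

    def close_all(self):
        """What the kernel does when the process dies: release everything."""
        for obj in self._objects.values():
            obj.close()
        self._objects.clear()

table = HandleTable()
f = io.StringIO("pretend this is a file")
h = table.open(f)
table.close_all()         # process terminated, no cooperation required
print(f.closed)           # True
```

The process never gets a say: because the kernel holds the authoritative table, cleanup works even after a forcible kill.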

There are special resources that survive the process that created them; for example, it is possible to create persistent named shared-memory segments for use in interprocess communication. These are seldom used, precisely because the OS cannot determine whether they are still required.
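Python's stdlib exposes exactly this kind of resource; the name "demo_shm" below is an arbitrary choice for the sketch. Because the segment is addressed by name rather than owned by one process, the application has to unlink it explicitly:

```python
from multiprocessing import shared_memory

# Create a named segment. It is visible system-wide under this name.
shm = shared_memory.SharedMemory(create=True, size=16, name="demo_shm")
shm.buf[0] = 42

# Any other process could now attach by name; we attach a second handle
# from this process to stand in for that.
other = shared_memory.SharedMemory(name="demo_shm")
value = other.buf[0]
print(value)

other.close()
shm.close()
shm.unlink()   # explicit cleanup; the OS would not do this for us
```

Without the `unlink()`, the segment can linger after every process that used it has exited, which is the cleanup burden the answer describes. (Note that CPython's resource tracker may warn about or clean up leaked segments on some platforms, but the underlying OS object is still persistent by design.)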

In other operating systems, there is sometimes no clear process separation; this places the burden of cleanup on individual applications.

Forcibly closing a process will terminate the process without giving it any chance to clean up; if the OS has a complete list of all resources, this has no adverse effects.