dynamically allocated memory after program termination

I don't think there are any guarantees in the language standard, but modern operating systems that support sparse virtual memory and memory protection (such as Mac OS X, Linux, all recent versions of Windows, and the operating systems on all currently manufactured phone handsets) automatically clean up after badly behaved processes when they terminate and free the memory for you. The memory remains unavailable, however, for as long as the program is running.
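As a minimal sketch of what that means in practice (standard hosted C, nothing platform-specific assumed), the following program leaks by any static-analysis standard, yet on the systems listed above the kernel reclaims the pages the instant the process exits:

```c
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Allocate 100 MB and deliberately never call free(). */
    size_t size = 100 * 1024 * 1024;
    char *p = malloc(size);
    if (p == NULL)
        return 1;
    memset(p, 0, size);  /* touch the pages so they are actually committed */

    /* On a modern OS the kernel tears down this process's entire
       virtual address space at exit, so the allocation is reclaimed
       even though we never freed it. */
    return 0;
}
```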

If you're programming on microcontrollers, on Mac OS 9 or earlier, DOS, or Windows 3.x, then you might need to be concerned about memory leaks making memory permanently unavailable to the whole operating system.


Most modern operating systems employ a memory manager, and all userland processes see only so-called virtual memory, which is not related to actual system memory in a way the program could inspect. This means that a program cannot simply read another process's memory or kernel memory. It also means that the memory manager will free all memory assigned to a process when that process terminates, so memory leaks within a program do not usually affect the rest of the system (beyond perhaps forcing a huge amount of disk swapping and triggering some "out of memory" behaviour while the process is still alive).
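To see the "out of memory" behaviour the leaking process itself runs into, here is a hedged sketch that leaks until allocation fails. Note that the exact failure mode is platform-dependent: on Linux with memory overcommit enabled, the OOM killer may terminate the process before malloc ever returns NULL.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t total_mb = 0;
    for (;;) {
        char *p = malloc(1024 * 1024);   /* leak 1 MB per iteration */
        if (p == NULL) {
            /* The allocator ran out of address space, or the OS
               refused to commit more memory to this process. */
            fprintf(stderr, "malloc failed after %zu MB\n", total_mb);
            return 1;
        }
        memset(p, 0xAB, 1024 * 1024);    /* touch the pages so they count */
        total_mb++;
    }
}
```

Either way, only this process (and overall system responsiveness, via swapping) suffers; other processes' memory contents stay untouched.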

This doesn't mean that it's in any way OK to treat memory leaks light-heartedly; it only means that no single program can casually corrupt other processes on a modern multi-tasking operating system (deliberate abuse of administrative privileges notwithstanding, of course).