Are memory leaks ever ok? [closed]
No.
As professionals, the question we should be asking ourselves is not "Is it ever OK to do this?" but rather "Is there ever a good reason to do this?" And "hunting down that memory leak is a pain" isn't a good reason.
I like to keep things simple. And the simple rule is that my program should have no memory leaks.
That makes my life simple, too. If I detect a memory leak, I eliminate it, rather than run through some elaborate decision tree structure to determine whether it's an "acceptable" memory leak.
It's similar to compiler warnings – will the warning be fatal to my particular application? Maybe not.
But it's ultimately a matter of professional discipline. Tolerating compiler warnings and tolerating memory leaks is a bad habit that will ultimately bite me in the rear.
To take things to an extreme, would it ever be acceptable for a surgeon to leave some piece of operating equipment inside a patient?
It is possible that a circumstance could arise where the cost/risk of removing that piece of equipment exceeds the cost/risk of leaving it in, and there could be circumstances where it was harmless. Even so, if I saw this question posted on SurgeonOverflow.com and saw any answer other than "no," it would seriously undermine my confidence in the medical profession.
If a third-party library forced this situation on me, it would lead me to seriously suspect the overall quality of the library in question. It would be as if I test drove a car and found a couple of loose washers and nuts in one of the cupholders – it may not be a big deal in itself, but it betrays a lack of commitment to quality, so I would consider alternatives.
I don't consider it to be a memory leak unless the amount of memory being "used" keeps growing. Having some unreleased memory, while not ideal, is not a big problem unless the amount of memory required keeps growing.
Let's get our definitions correct first. A memory leak is when memory is dynamically allocated, e.g. with malloc(), and all references to the memory are lost without the corresponding free(). An easy way to make one is like this:
#include <stdlib.h>
#define BLK ((size_t)1024)
int main(void) {
    while (1) {
        void *vp = malloc(BLK); /* the previous block's address is lost */
    }
}
Note that every time around the while(1) loop, 1024 (+overhead) bytes are allocated, and the new address assigned to vp; there's no remaining pointer to the previous malloc'ed blocks. This program is guaranteed to run until the heap runs out, and there's no way to recover any of the malloc'ed memory. Memory is "leaking" out of the heap, never to be seen again.
What you're describing, though, sounds like
#include <stdlib.h>
#define LOTS (100 * 1024 * 1024) /* stand-in for "a lot" of memory */
int main(void) {
    void *vp = malloc(LOTS);
    /* ... go do something useful with vp ... */
    return 0; /* the OS reclaims everything at process exit */
}
You allocate the memory, work with it until the program terminates. This is not a memory leak; it doesn't impair the program, and all the memory will be scavenged up automagically when the program terminates.
Generally, you should avoid memory leaks. First, because like altitude above you and fuel back at the hangar, memory that has leaked and can't be recovered is useless; second, it's a lot easier to code correctly, not leaking memory, at the start than it is to find a memory leak later.
In theory no, in practice it depends.
It really depends on how much data the program is working on, how often the program is run and whether or not it is running constantly.
If I have a quick program that reads a small amount of data, makes a calculation, and exits, a small memory leak will never be noticed. Because the program is not running for very long and only uses a small amount of memory, the leak will be small and freed when the program exits.
On the other hand if I have a program that processes millions of records and runs for a long time, a small memory leak might bring down the machine given enough time.
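To make the long-running case concrete, here is a minimal sketch; the function name, record format, and 64-byte scratch buffer are invented for illustration, but the arithmetic is the point: a per-record leak scales with the number of records.

#include <stdlib.h>
#include <string.h>

/* Hypothetical handler called once per record in a long-running job. */
void process_record(const char *record) {
    char *copy = malloc(64); /* scratch copy for parsing */
    if (copy == NULL)
        return;
    strncpy(copy, record, 63);
    copy[63] = '\0';
    /* ... parse and act on the record ... */
    /* Missing free(copy): 64 bytes leak per call. Over ten million
       records that is roughly 640MB gone, which is exactly the
       "bring down the machine given enough time" scenario. */
}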
As for third-party libraries that have leaks: if they cause a problem, either fix the library or find a better alternative. If they don't cause a problem, does it really matter?
Many people seem to be under the impression that once you free memory, it's instantly returned to the operating system and can be used by other programs.
This isn't true. Operating systems commonly manage memory in 4KiB pages. malloc() and other sorts of memory management get pages from the OS and sub-manage them as they see fit. It's quite likely that free() will not return pages to the operating system, under the assumption that your program will malloc more memory later.
I'm not saying that free() never returns memory to the operating system. It can happen, particularly if you are freeing large stretches of memory. But there's no guarantee.
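As an aside, some allocators let a program ask for this explicitly. glibc, for example, offers malloc_trim() - a non-portable extension - which attempts to hand free heap pages back to the OS. A minimal sketch:

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h> /* glibc extension; not portable */

int main(void) {
    enum { N = 100000 };
    static void *p[N];
    for (int i = 0; i < N; i++)
        p[i] = malloc(100); /* many small blocks, served from the heap */
    for (int i = 0; i < N; i++)
        free(p[i]); /* freed, but the pages likely stay with the process */
    /* Ask glibc to return free heap pages to the OS; returns 1 if it did. */
    printf("released: %d\n", malloc_trim(0));
    return 0;
}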
The important fact: If you don't free memory that you no longer need, further mallocs are guaranteed to consume even more memory. But if you free first, malloc might re-use the freed memory instead.
What does this mean in practice? It means that if you know your program isn't going to require any more memory from now on (for instance it's in the cleanup phase), freeing memory is not so important. However if the program might allocate more memory later, you should avoid memory leaks - particularly ones that can occur repeatedly.
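You can often watch the re-use happen. In the sketch below, the two printed addresses usually match on common allocators, though nothing in the C standard guarantees it:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *a = malloc(128);
    printf("first:  %p\n", a);
    free(a); /* the block goes back on the allocator's free list */
    void *b = malloc(128); /* same size: typically gets the same block back */
    printf("second: %p\n", b);
    free(b);
    return 0;
}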
Also see this comment for more details about why freeing memory just before termination is bad.
A commenter didn't seem to understand that calling free() does not automatically allow other programs to use the freed memory. But that's the entire point of this answer!
So, to convince people, I will give an example where free() does very little good. To make the math easy to follow, I will pretend that the OS manages memory in 4000-byte pages.
Suppose you allocate ten thousand 100-byte blocks (for simplicity I'll ignore the extra memory that would be required to manage these allocations). This consumes 1MB, or 250 pages, with 40 blocks per page. If you then free 9000 of these blocks at random, you're left with just 1000 blocks - but they're scattered all over the place. Statistically, about 5 of the pages will be empty - a page can only be handed back if all 40 of its blocks happen to have been freed. The other 245 will each have at least one allocated block in them. That amounts to 980KB of memory that cannot possibly be reclaimed by the operating system - even though you now only have 100KB allocated!
On the other hand, you can now malloc() 9000 more blocks without increasing the amount of memory your program is tying up.
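If you want to see this effect for yourself, here is a rough sketch of the experiment; exact numbers depend on the allocator, and on Linux you can watch the process's resident memory in top while the program pauses:

#include <stdio.h>
#include <stdlib.h>

#define NBLOCKS 10000

int main(void) {
    static void *blocks[NBLOCKS];
    for (int i = 0; i < NBLOCKS; i++)
        blocks[i] = malloc(100); /* ~1MB, spread over ~250 pages */
    for (int i = 0; i < NBLOCKS; i++)
        if (rand() % 10 != 0) { /* free roughly 90% at random */
            free(blocks[i]);
            blocks[i] = NULL;
        }
    /* About 1000 blocks survive, scattered across almost every page,
       so the allocator can return very few pages to the OS. */
    getchar(); /* pause here and inspect the process's memory use */
    for (int i = 0; i < NBLOCKS; i++)
        free(blocks[i]); /* free(NULL) is a harmless no-op */
    return 0;
}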
Even when free() could technically return memory to the OS, it may not do so. free() needs to achieve a balance between operating quickly and saving memory. And besides, a program that has already allocated a lot of memory and then freed it is likely to do so again. A web server needs to handle request after request after request - it makes sense to keep some "slack" memory available so you don't need to ask the OS for memory all the time.
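That "slack" idea is easy to sketch: a free list that recycles fixed-size buffers instead of going through malloc and free on every request. The names and the 4096-byte buffer size below are invented for illustration:

#include <stdlib.h>

#define BUF_SIZE 4096 /* must be at least sizeof(struct node) */

struct node { struct node *next; };
static struct node *free_list = NULL;

void *buf_get(void) {
    if (free_list != NULL) { /* reuse a cached buffer: no malloc needed */
        void *p = free_list;
        free_list = free_list->next;
        return p;
    }
    return malloc(BUF_SIZE); /* cache empty: fall back to the allocator */
}

void buf_put(void *p) {
    struct node *n = p; /* keep the buffer for next time instead of free() */
    n->next = free_list;
    free_list = n;
}

The trade-off is deliberate: the pool's memory is never returned, but a busy server never pays the allocation cost twice for the same buffer.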