Is it ever OK to *not* use free() on allocated memory?

Solution 1:

Easy: just read the source of pretty much any half-serious malloc()/free() implementation. By this, I mean the actual memory manager that does the work behind those calls. It might live in the runtime library, the virtual machine, or the operating system; of course, the code is not equally accessible in all cases.

Making sure memory does not stay fragmented, by joining adjacent holes into larger holes, is very, very common. More serious allocators use more sophisticated techniques to ensure this.

So, let's assume you do three allocations and get blocks laid out in memory in this order:

+-+-+-+
|A|B|C|
+-+-+-+

The sizes of the individual allocations don't matter. Then you free the first and last ones, A and C:

+-+-+-+
| |B| |
+-+-+-+

When you finally free B, you (initially, at least in theory) end up with:

+-+-+-+
| | | |
+-+-+-+

which can be de-fragmented into just

+-+-+-+
|     |
+-+-+-+

i.e. a single larger free block, no fragments left.

References, as requested:

  • Try reading the code for dlmalloc. It's a lot more advanced, being a full production-quality implementation.
  • Even in embedded applications, de-fragmenting implementations are available. See for instance these notes on the heap4.c code in FreeRTOS.

Solution 2:

The other answers already explain perfectly well that real implementations of malloc() and free() do indeed coalesce (defragment) adjacent holes into larger free chunks. But even if that weren't the case, it would still be a bad idea to forgo free().

The thing is, your program just allocated (and wants to free) those 4 bytes of memory. If it's going to run for an extended period of time, it's quite likely that it will need to allocate just 4 bytes again. So even if those 4 bytes never coalesce into a larger contiguous space, they can still be re-used by the program itself.