Why do I get different results when I dereference a pointer after freeing it?

It is undefined behaviour, so it is an error to dereference a freed pointer: strange things may (and will) happen.

free() doesn't change the value of the pointer itself, so it keeps pointing into the heap within the process address space - that's why you usually don't get a segfault. However, this is not guaranteed: it is unspecified, and in theory on some platforms you can get a segfault when you dereference the pointer even immediately after freeing.
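As an illustrative sketch only (even inspecting the pointer's value after free() is not something the standard sanctions, so this merely shows what typically happens on common platforms):

    /* Sketch: free() receives the pointer by value, so it cannot modify
     * the caller's variable; on typical platforms p still holds the old
     * address afterwards. Not guaranteed behaviour. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *p = malloc(sizeof *p);
        if (p == NULL)
            return 1;

        *p = 42;
        printf("before free: p = %p\n", (void *)p);

        free(p);                                     /* releases the block... */
        printf("after  free: p = %p\n", (void *)p);  /* ...but p itself usually still holds the old address */

        /* Dereferencing *p here would be the undefined behaviour discussed above. */
        return 0;
    }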

To catch such mistakes, it is a good habit to set the pointer to NULL right after freeing it, so that a later dereference fails in a predictable way - with a segfault.
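A minimal sketch of that idiom:

    #include <stdlib.h>

    int main(void)
    {
        int *p = malloc(sizeof *p);
        if (p == NULL)
            return 1;

        *p = 42;

        free(p);
        p = NULL;   /* dangling pointer neutralised */

        /* A mistaken later use such as *p = 0; now dereferences NULL and,
         * on typical platforms, crashes immediately instead of silently
         * reading or corrupting allocator-owned memory. */
        return 0;
    }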

Please note that on some OSes (HP-UX, maybe some others as well) dereferencing a NULL pointer is allowed, precisely to prevent a segfault (and thus hide problems). I find that rather ill-advised, as it makes issues much harder to diagnose, although I don't know the full story behind the decision.


free() (and malloc()) are not from gcc. They come from the C library, which on Debian is usually glibc. So, what you are seeing is glibc's behavior, not gcc's (and would change with a different C library, or a different version of the C library).

In particular, after you call free() you are releasing the memory block malloc() gave you. It's not yours anymore. Since it is not supposed to be used anymore, the memory manager within glibc is free to do whatever it wants with the block, including using parts of it for its own data structures (which is probably why you are seeing its contents change; they have been overwritten with bookkeeping information, probably pointers to other blocks or counters of some sort).
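As a purely observational sketch (reading freed memory is undefined behaviour, so the output depends entirely on the allocator in use and may differ or crash on your machine):

    /* Sketch only: demonstrates how a freed block's contents may appear
     * to change once the allocator reuses it for its own bookkeeping.
     * This is undefined behaviour, shown solely for illustration. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *p = malloc(16 * sizeof *p);
        if (p == NULL)
            return 1;

        p[0] = 1234;
        printf("before free: p[0] = %d\n", p[0]);

        free(p);
        /* The allocator may now reuse the first words of the block for
         * free-list pointers or size fields, so this value often changes. */
        printf("after  free: p[0] = %d  (undefined behaviour!)\n", p[0]);

        return 0;
    }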

There are other things that can happen; in particular, if your allocation was large enough, glibc can ask the kernel for a separate memory block for it (with mmap() or similar calls) and release it back to the kernel during the free(). In that case, dereferencing the pointer after free() would crash your program. This can in theory also happen in some circumstances even with small allocations (glibc can grow/shrink the heap).
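A sketch of that case, under the assumption that the allocation is large enough to be served by mmap() (glibc's default threshold is roughly 128 KiB, though it is tunable):

    /* Sketch: with a large allocation, free() typically returns the pages
     * to the kernel via munmap(), so the dereference below is likely to
     * raise SIGSEGV rather than return stale data. Still undefined
     * behaviour either way. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t big = 1024 * 1024;   /* 1 MiB: well above the usual mmap threshold */
        char *p = malloc(big);
        if (p == NULL)
            return 1;

        p[0] = 'x';
        free(p);                    /* pages likely unmapped and handed back to the kernel */

        printf("%c\n", p[0]);       /* undefined behaviour; likely a segfault here */
        return 0;
    }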


This is probably not the answer you are looking for, but I'll give it a try anyway:

Since you're playing with undefined behaviour that you should never depend on in any way, shape or form, what good does it do to know how exactly one given implementation handles that?

Since gcc is free to change that handling at any given time, between versions, architectures or depending on the position and brightness of the moon, there's no use whatsoever in knowing how it handles it right now. At least not to the developer that uses gcc.