Why doesn't my program crash when I write past the end of an array?

Why does the code below run without any crash at runtime?

Also, how far past the end I can write seems completely dependent on the machine/platform/compiler! On a 64-bit machine I can even go up to index 200. How would a segmentation fault in the main function get detected by the OS?

int main(int argc, char* argv[])
{
    int arr[3];
    arr[4] = 99;
}

Where does this buffer space come from? Is it the stack allocated to the process?


Something I wrote some time ago for educational purposes...

Consider the following C program:

int q[200];

int main(void) {
    int i;
    for (i = 0; i < 2000; i++) {
        q[i] = i;   /* writes past the end of q once i >= 200 */
    }
    return 0;
}

After compiling and running it, a core dump is produced:

$ gcc -ggdb3 segfault.c
$ ulimit -c unlimited
$ ./a.out
Segmentation fault (core dumped)

Now use gdb to perform a post-mortem analysis:

$ gdb -q ./a.out core
Program terminated with signal 11, Segmentation fault.
[New process 7221]
#0  0x080483b4 in main () at s.c:8
8       q[i]=i;
(gdb) p i
$1 = 1008
(gdb)

Huh, the program didn't segfault when we wrote outside the 200 items allocated; instead it crashed when i = 1008. Why?

Enter pages.

One can determine the page size in several ways on UNIX/Linux. One way is to use the library function sysconf() like this:

#include <stdio.h>
#include <unistd.h> // sysconf(3)

int main(void) {
    printf("The page size for this system is %ld bytes.\n",
            sysconf(_SC_PAGESIZE));

    return 0;
}

which gives the output:

The page size for this system is 4096 bytes.

Or one can use the command-line utility getconf like this:

$ getconf PAGESIZE
4096
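
As a side note, and only as a minimal sketch assuming a glibc-style system where the legacy BSD interface is available, getpagesize() reports the same value:

#include <stdio.h>
#include <unistd.h> // getpagesize(), legacy BSD interface

int main(void) {
    /* Older, non-POSIX way of querying the same value that
       sysconf(_SC_PAGESIZE) returns. */
    printf("getpagesize() reports %d bytes.\n", getpagesize());
    return 0;
}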

Post mortem

It turns out that the segfault occurs not at i = 200 but at i = 1008. Let's figure out why. Start gdb to do some post-mortem analysis:

$ gdb -q ./a.out core

Core was generated by `./a.out'.
Program terminated with signal 11, Segmentation fault.
[New process 4605]
#0  0x080483b4 in main () at seg.c:6
6           q[i]=i;
(gdb) p i
$1 = 1008
(gdb) p &q
$2 = (int (*)[200]) 0x804a040
(gdb) p &q[199]
$3 = (int *) 0x804a35c

q ended at address 0x804a35c; more precisely, q[199] starts at that address and its last byte is at 0x804a35f. The page size is, as we saw earlier, 4096 bytes, and with the machine's 32-bit word size a virtual address breaks down into a 20-bit page number and a 12-bit offset.
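
A minimal sketch of that split, assuming 4096-byte pages (hence a 12-bit offset) and the same global q[] as above:

#include <stdio.h>
#include <stdint.h>

int q[200];

int main(void) {
    /* Split the address of the last real element into a virtual page
       number and a 12-bit offset within the page (assumes 4 KiB pages). */
    uintptr_t addr   = (uintptr_t)&q[199];
    uintptr_t page   = addr >> 12;     /* page number (the upper 20 bits on the 32-bit machine above) */
    uintptr_t offset = addr & 0xfff;   /* lower 12 bits: offset within the page */

    printf("q[199] is at %p: page number 0x%lx, offset 0x%lx\n",
           (void *)&q[199], (unsigned long)page, (unsigned long)offset);
    return 0;
}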

q[] ended in virtual page number 0x804a (= 32842), at offset 0x35c (= 860). q[199] therefore occupies bytes 860 through 863 of that page, so there were still

4096 - 864 = 3232

bytes left on the page on which q[] was allocated. That space can hold

3232 / 4 = 808

integers, and the code treated it as if it contained elements of q at positions 200 up to 1007.

We all know that those elements don't exist, and the compiler didn't complain; neither did the hardware, since we have write permission to that page. Only when i = 1008 did q[i] refer to an address on a different page, one for which we didn't have write permission; the virtual-memory hardware detected this and triggered the segfault.

An integer is stored in 4 bytes, meaning that this page contains 808 (3232 / 4) additional "fake" elements, so it is still possible to access q[200], q[201], all the way up to element 199 + 808 = 1007 (q[1007]) without triggering a segfault. When accessing q[1008] you enter a new page, for which the permissions are different.
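
To make the arithmetic concrete, here is a small sketch that predicts where the page boundary falls. It assumes q[] fits on a single page, as in the example above; whether a write at that index actually faults still depends on what the OS happens to have mapped after q[].

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

int q[200];

int main(void) {
    uintptr_t psize = (uintptr_t)sysconf(_SC_PAGESIZE);   /* 4096 in the example above */
    uintptr_t start = (uintptr_t)&q[0];

    /* First address on the page following the one q[] starts on. */
    uintptr_t next_page = (start / psize + 1) * psize;

    /* Index of the first element whose storage would lie on that next page. */
    size_t first_on_next_page = (next_page - start) / sizeof q[0];

    printf("q[] starts at %p; q[%zu] is the first element on the next page\n",
           (void *)&q[0], first_on_next_page);
    return 0;
}

With the addresses from the core dump above (q[] starting at 0x804a040, i.e. offset 0x040 into its page), this works out to (4096 - 64) / 4 = 1008, matching the index at which the crash was observed.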


Since you're writing outside the boundaries of your array, the behaviour of your code is undefined.

It is the nature of undefined behaviour that anything can happen, including the absence of a segfault (the compiler is under no obligation to perform bounds checking).

You're writing to memory you haven't allocated, but which happens to be there and which, probably, is not being used for anything else. Your code might behave differently if you make changes to seemingly unrelated parts of the code, or to your OS, compiler, optimization flags, etc.
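
As a sketch of how "differently" that can look (the behaviour is undefined, so this is only illustrative and will vary with compiler, flags and platform), the stray write may simply clobber a neighbouring variable instead of crashing:

#include <stdio.h>

int main(void) {
    int canary = 7;
    int arr[3];

    /* Undefined behaviour: the write may land on 'canary', on padding,
       on saved state, or on something that faults -- nothing is promised. */
    arr[4] = 99;

    printf("canary = %d\n", canary);   /* might print 7, 99, or anything else */
    return 0;
}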

In other words, once you're in that territory, all bets are off.