Should you always use 'int' for numbers in C, even if they are non-negative?

One thing that hasn't been mentioned is that interchanging signed and unsigned numbers can lead to security bugs. This is a big issue, since many functions in the standard C library take or return unsigned numbers (fread, memcpy, malloc, etc. all take size_t parameters).

For instance, take the following innocuous example (from real code):

#include <string.h>   /* for memcpy */

void process(char* buffer);   /* defined elsewhere */

//Copy a user-defined structure into a buffer and process it
char* processNext(char* data, short length)
{
    char buffer[512];
    if (length <= 512) {
        memcpy(buffer, data, length);
        process(buffer);
        return data + length;
    } else {
        return NULL;   /* signal failure to the caller */
    }
}

Looks harmless, right? The problem is that length is signed, but is converted to unsigned when passed to memcpy's size_t parameter. Thus setting length to a negative value (SHRT_MIN, say) will pass the <= 512 check, but the conversion turns it into an enormous size_t, so memcpy copies far more than 512 bytes into the buffer - this lets an attacker overwrite the function's return address on the stack and (after a bit of work) take over your computer!
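To see the size of the value memcpy actually receives, here's a tiny standalone snippet (assuming a typical platform where short is 16 bits and size_t is 64 bits; the exact number will vary):

#include <stdio.h>
#include <limits.h>
#include <stddef.h>

int main(void)
{
    short length = SHRT_MIN;        /* -32768 on typical platforms */
    size_t n = (size_t)length;      /* the implicit conversion memcpy's size_t parameter performs */
    printf("%d becomes %zu\n", (int)length, n);   /* prints an enormous value */
    return 0;
}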

You may naively be saying, "It's so obvious that length needs to be size_t or checked to be >= 0, I could never make that mistake". Except I guarantee that if you've ever written anything non-trivial, you have. So have the authors of Windows, Linux, BSD, Solaris, Firefox, OpenSSL, Safari, MS Paint, Internet Explorer, Google Picasa, Opera, Flash, Open Office, Subversion, Apache, Python, PHP, Pidgin, Gimp, ... on and on and on ... - and these are all bright people whose job it is to know security.

In short, always use size_t for sizes.
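For instance, one possible fix for the example above (just a sketch; the name processNextFixed and the reuse of process() are illustrative, not from the original code) is to take the length as size_t and bound it against the buffer:

#include <string.h>

void process(char* buffer);   /* defined elsewhere, as above */

char* processNextFixed(char* data, size_t length)
{
    char buffer[512];
    if (length <= sizeof buffer) {
        memcpy(buffer, data, length);
        process(buffer);
        return data + length;
    }
    return NULL;   /* length too large: signal failure with a null pointer */
}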

Man, programming is hard.


Should I always ...

The answer to "Should I always ..." is almost certainly 'no'. There are a lot of factors that dictate whether you should use a particular datatype, and consistency is important.

But this is a highly subjective question, and it's really easy to mess up with unsigned types:

for (unsigned int i = 10; i >= 0; i--);

results in an infinite loop.

This is why some style guides, including Google's C++ Style Guide, discourage unsigned data types.
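If you genuinely need to count down with an unsigned index, one common idiom (a sketch, not the only option) is to test before decrementing, so the check runs against the old value and the loop still terminates:

for (unsigned int i = 10; i-- > 0; ) {
    /* the body sees i = 9, 8, ..., 1, 0, and the loop then exits */
}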

In my personal opinion, I haven't run into many bugs caused by these problems with unsigned data types. I'd say use assertions to check your code, and use unsigned types judiciously (and sparingly where you're performing arithmetic).


Some cases where you should use unsigned integer types are:

  • You need to treat a datum as a pure binary representation.
  • You need the semantics of modulo arithmetic you get with unsigned numbers.
  • You have to interface with code that uses unsigned types (e.g., standard library routines that accept/return size_t values).
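A minimal sketch of the first two cases (assuming a typical platform where unsigned int is 32 bits):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Pure binary representation: shifting into the top bit is well defined
       for unsigned types, but undefined behavior for a signed int. */
    uint32_t flags = 0;
    flags |= 1u << 31;

    /* Modulo arithmetic: unsigned overflow wraps around by definition. */
    uint8_t counter = 250;
    counter = (uint8_t)(counter + 10);   /* (250 + 10) mod 256 == 4 */

    printf("top bit: %u, counter: %u\n", (unsigned)(flags >> 31), (unsigned)counter);
    return 0;
}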

But for general arithmetic, the thing is, when you say that something "can't be negative," that does not necessarily mean you should use an unsigned type. You can still pass a negative value into an unsigned variable; it just silently becomes a really large value by the time you look at it. So if you mean that negative values are forbidden, such as for a basic square root function, you are stating a precondition of the function, and you should assert it. But you can't assert a condition on a value the type can't even represent: with an unsigned parameter, a negative argument has already been converted to a huge positive number before your check runs. You need a type that can hold the out-of-band values so you can test for them (this is the same sort of logic behind getchar() returning an int and not a char).
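As a sketch of that last point (a hypothetical integer square root, not from any real library), keeping the parameter signed is what makes the precondition checkable at all:

#include <assert.h>

/* Precondition: n >= 0. With a signed parameter the assert can actually
   catch a negative argument; with an unsigned parameter, passing -1 would
   arrive as a huge positive value and the check could never fire. */
long isqrt(long n)
{
    assert(n >= 0);
    long r = 0;
    /* r + 1 <= n / (r + 1) is equivalent to (r + 1)^2 <= n for positive
       integers, and avoids overflow in the multiplication */
    while (r + 1 <= n / (r + 1))
        r++;
    return r;
}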

The choice of signed vs. unsigned can have practical repercussions on performance as well. Take a look at the (contrived) code below:

#include <stdbool.h>

bool foo_i(int a)
{
    return (a + 69) > a;
}

bool foo_u(unsigned int a)
{
    return (a + 69u) > a;
}

Both functions are identical except for the type of their parameter. But when compiled with c99 -fomit-frame-pointer -O2 -S, you get:

        .file   "try.c"
        .text
        .p2align 4,,15
.globl foo_i
        .type   foo_i, @function
foo_i:
        movl    $1, %eax
        ret
        .size   foo_i, .-foo_i
        .p2align 4,,15
.globl foo_u
        .type   foo_u, @function
foo_u:
        movl    4(%esp), %eax
        leal    69(%eax), %edx
        cmpl    %eax, %edx
        seta    %al
        ret
        .size   foo_u, .-foo_u
        .ident  "GCC: (Debian 4.4.4-7) 4.4.4"
        .section        .note.GNU-stack,"",@progbits

You can see that foo_i() is more efficient than foo_u(). This is because unsigned arithmetic overflow is defined by the standard to "wrap around," so (a + 69u) may very well be smaller than a if a is very large, and the compiler must emit code for that case. On the other hand, signed arithmetic overflow is undefined, so GCC goes ahead and assumes it never happens, concludes that (a + 69) can never be less than or equal to a, and folds foo_i() into an unconditional return true. Choosing unsigned types indiscriminately can therefore unnecessarily impact performance.
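For what it's worth, the comparison code in foo_u() isn't dead weight; a throwaway snippet shows the case the compiler has to handle:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int a = UINT_MAX;
    /* a + 69u wraps around to 68, so the comparison really is false here */
    printf("%d\n", (a + 69u) > a);   /* prints 0 */
    return 0;
}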