The real difference between "int" and "unsigned int"

Solution 1:

Hehe. You have an implicit conversion here, because you're telling printf what type to expect.

Try this on for size instead:

#include <stdio.h>

int main(void)
{
    unsigned int x = 0xFFFFFFFF;   /* same bit pattern... */
    int y = 0xFFFFFFFF;            /* ...stored in a signed int */

    if (x < 0)
        printf("one\n");
    else
        printf("two\n");

    if (y < 0)
        printf("three\n");
    else
        printf("four\n");

    return 0;
}

Solution 2:

Yes, because in your case they use the same representation.

The bit pattern 0xFFFFFFFF happens to look like -1 when interpreted as a 32-bit signed integer and like 4294967295 when interpreted as a 32-bit unsigned integer.
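
If you want to convince yourself that the two objects really hold the same bytes, here's a minimal sketch (assuming a machine where int and unsigned int are 32 bits and negative values use two's complement) that compares their object representations directly:

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int x = 0xFFFFFFFF;
    int y = -1;

    /* Compare the raw bytes of the two objects. */
    if (memcmp(&x, &y, sizeof x) == 0)
        printf("same bit pattern\n");       /* expected on such a machine */
    else
        printf("different bit patterns\n");

    return 0;
}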

It's the same as char c = 65. If you interpret it as an integer, it's 65. If you interpret it as a character, it's 'A' (in ASCII).
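
A quick sketch of that, assuming an ASCII character set:

#include <stdio.h>

int main(void)
{
    char c = 65;

    printf("%d\n", c);   /* interpreted as an integer: prints 65 */
    printf("%c\n", c);   /* interpreted as a character: prints A */

    return 0;
}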


As R and pmg point out, it's technically undefined behavior to pass arguments that don't match the format specifiers, so the program could do anything (print garbage values, crash, or happen to print the "right" thing).

The standard spells this out in 7.19.6.1 paragraph 9:

If a conversion specification is invalid, the behavior is undefined. If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
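
To make that concrete, here's a small sketch (the values are just for illustration) contrasting the undefined call with well-defined alternatives:

#include <stdio.h>

int main(void)
{
    unsigned int x = 0xFFFFFFFF;

    printf("%d\n", x);        /* undefined behavior: %d expects an int */
    printf("%u\n", x);        /* well-defined: the specifier matches the type */
    printf("%d\n", (int)x);   /* the conversion to int is implementation-defined,
                                 but the printf call itself is well-defined */

    return 0;
}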