If the sizes of "long" and "int" are the same on a platform, are "long" and "int" different in any way?

They are not compatible types, which you can see with a simple example:

int* iptr;
long* lptr = iptr; // compiler error here

So it mostly matters when dealing with pointers to these types. Similarly, there is the "strict aliasing rule", which makes this code undefined behavior:

int i;
long* lptr = (long*)&i;
*lptr = ...;  // undefined behavior
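If you actually need to reinterpret the bytes of an int as a long, memcpy is the well-defined route. A minimal sketch, assuming sizeof(long) == sizeof(int) (the function name is made up for illustration):

#include <string.h>

long int_bits_as_long(int i)
{
    long l = 0;
    // memcpy accesses the objects as bytes, so no aliasing rule is broken
    memcpy(&l, &i, sizeof i);
    return l;
}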

Another subtle issue is implicit promotion. If you have some_int + some_long, then the resulting type of that expression is long, or, in case either operand is unsigned, unsigned long. This happens through the usual arithmetic conversions, see Implicit type promotion rules. It shouldn't matter most of the time, but code such as this will fail: _Generic(some_int + some_long, int: stuff() ) since there is no long clause in the _Generic expression.
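A minimal sketch of that pitfall, assuming a C11 compiler (the macro name type_name and the main function are made up for illustration):

#include <stdio.h>

#define type_name(x) _Generic((x), \
    int: "int", \
    long: "long", \
    unsigned long: "unsigned long", \
    default: "other")

int main(void)
{
    int some_int = 1;
    long some_long = 2;
    puts(type_name(some_int + some_long)); // prints "long"
    return 0;
}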

Generally, when assigning values between the types, there shouldn't be any problems. In the case of uint32_t, it doesn't matter which of the two it corresponds to, because you should treat uint32_t as a separate type anyway. I'd pick long for compatibility with small microcontrollers, where int is 16 bits and typedef unsigned int uint32_t; will break. (And obviously, typedef signed long int32_t; for the signed equivalent.)
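To see why, consider this C11 sketch: the first assertion fires exactly on the platforms where the int-based typedef would break (for example, 16-bit microcontrollers where UINT_MAX is 65535), while the second can never fire, because the standard guarantees ULONG_MAX is at least 4294967295:

#include <limits.h>

// typedef unsigned int uint32_t; would need this to hold:
_Static_assert(UINT_MAX >= 0xFFFFFFFF, "unsigned int is narrower than 32 bits");

// typedef unsigned long uint32_t; relies on this, which always holds:
_Static_assert(ULONG_MAX >= 0xFFFFFFFF, "cannot fail on a conforming compiler");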


The types long and int have different conversion ranks. The rank of the type long is higher than the rank of the type int. So in a binary expression where an object of type long and an object of type int are used, the latter is always converted to the type long.

Compare the following code snippets.

int x = 0;
unsigned int y = 0;

the type of the expression x + y is unsigned int.

long x = 0;
unsigned int y = 0;

the type of the expression x + y is unsigned long (due to the usual arithmetic conversions), provided that sizeof( int ) is equal to sizeof( long ). If long is wider than int, the type is instead long, because long can then represent every value of unsigned int.
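You can verify this with a C11 sketch along these lines (the macro name kind_of is made up for the example):

#include <stdio.h>

#define kind_of(e) _Generic((e), \
    long: "long", \
    unsigned long: "unsigned long", \
    default: "other")

int main(void)
{
    long x = 0;
    unsigned int y = 0;
    // Prints "unsigned long" where sizeof( int ) == sizeof( long );
    // prints "long" where long is wider.
    puts(kind_of(x + y));
    return 0;
}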

This is even more important in C++ than in C, because C++ allows function overloading.

In C you have to take this into account, for example, when using I/O functions such as printf, to specify the correct conversion specifier.
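For example, even when the two types happen to have the same size, the conversion specifiers must still match the declared types:

#include <stdio.h>

int main(void)
{
    int i = 42;
    long l = 42;

    printf("%d\n", i);  // %d matches int
    printf("%ld\n", l); // %ld matches long; passing l with plain %d is
                        // undefined behavior even if the sizes match
    return 0;
}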


Even on platforms where long and int have the same representation, the Standard would allow compilers to be willfully blind to the possibility that a store made through a long* might affect an object that is read through an int*, or vice versa. Given something like:

#include <stdint.h>

void store_to_int32(void *p, int index)
{
    ((int32_t*)p)[index] = 2;
}
int array1[10];
int test1(int index)
{
    array1[0] = 1;
    store_to_int32(array1, index);
    return array1[0]; // may be "optimized" to always return 1 if the
                      // compiler assumes an int32_t* cannot alias an int*
}
long array2[10];
long test2(int index)
{
    array2[0] = 1;
    store_to_int32(array2, index);
    return array2[0]; // likewise, if the compiler assumes an int32_t*
                      // cannot alias a long*
}

The 32-bit ARM version of gcc will treat int32_t as synonymous with long, and ignore the possibility that passing the address of array1 to store_to_int32 might cause the first element of that array to be written. Likewise, the 32-bit version of clang will treat int32_t as synonymous with int, and ignore the possibility that passing the address of array2 to store_to_int32 might cause that array's first element to be written.
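If code like store_to_int32 has to work under both compilers' assumptions, one option (a sketch, not part of this answer; the name store_to_int32_portable is made up) is to perform the store through memcpy, which is defined to access memory as bytes and so sidesteps the aliasing question entirely:

#include <stdint.h>
#include <string.h>

void store_to_int32_portable(void *p, int index)
{
    int32_t v = 2;
    // memcpy copies the representation byte by byte, so it may
    // legally write into an int array or a long array alike
    memcpy((char *)p + (size_t)index * sizeof v, &v, sizeof v);
}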

To be sure, nothing in the Standard would prohibit compilers from behaving in that fashion, but I think the Standard's failure to prohibit such blindness stems from the principle "the dumber something would be, the less need there should be to prohibit it".