What is the difference between using the INTXX_C macros and performing a type cast on literals?

(uint64_t)1 is formally the int constant 1 cast to uint64_t.
1ul is a constant 1 of type unsigned long, which is probably the same as uint64_t on a 64-bit system.
The macro is a portable way to specify the correct suffix for a constant (literal) of type uint64_t.
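
For illustration, here is a minimal sketch, assuming a typical LP64 platform where uint64_t is unsigned long; all three initialisers below yield the same value:

#include <stdint.h>

uint64_t a = (uint64_t)1; /* int constant 1, converted to uint64_t */
uint64_t b = 1ul;         /* unsigned long constant; 64 bits on this platform */
uint64_t c = UINT64_C(1); /* expands to 1 with whatever suffix the platform needs */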

As you are dealing with constants, all calculations are done by the compiler, and the result is the same either way.

The suffix appended by the macro (ul here; it is system-specific) can be used on literal constants only.

The cast (uint64_t) can be used for both constant and variable values. With a constant it has the same effect as the suffix/macro; with a variable of a different type it may truncate or extend the value, e.g. fill the high bits with 0 when widening from 32 to 64 bits.
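
A minimal sketch of that difference (the variable names are illustrative):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t x = 0xFFFFFFFFu;
    uint64_t wide = (uint64_t)x;     /* extension: high 32 bits filled with 0 */
    uint64_t big = 0x100000001ull;
    uint32_t narrow = (uint32_t)big; /* truncation: only the low 32 bits survive */

    printf("%" PRIu64 "\n", wide);   /* prints 4294967295 */
    printf("%" PRIu32 "\n", narrow); /* prints 1 */
    return 0;
}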

It's a matter of taste whether you use UINT64_C(1) or (uint64_t)1. The macro makes it a bit clearer that you are dealing with a constant.


There is no obvious difference or advantage; these macros are somewhat redundant. There are, however, some minor, subtle differences between the cast and the macro:

  • (uintn_t)1 might be cumbersome to use for preprocessor purposes, whereas UINTN_C(1) expands into a single pp token (see the sketch after this list).

  • The resulting type of UINTN_C(x) is actually uint_leastn_t, not uintn_t, so it is not necessarily the type you expected.

  • Static analysers for coding standards like MISRA-C might moan if you type 1 rather than 1u in your code, since shifting signed integers isn't a brilliant idea regardless of their size.
    (uint64_t)1u is MISRA-compliant; UINT64_C(1) might not be, or at least the analyser won't be able to tell, since it can't expand pp tokens the way a compiler can. And UINT64_C(1u) will likely not work, since the macro implementation probably looks something like this:

    #define UINT64_C(n) n ## ull  // paste the suffix onto the literal
    // BAD: UINT64_C(1u) expands to 1u ## ull = 1uull, not a valid integer constant
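
To illustrate the first bullet: a cast cannot appear in a #if expression, while the macro's expansion can. A sketch, assuming a conforming <stdint.h> (the variable name is illustrative):

#include <stdint.h>

/* OK: UINT64_C(1) expands to a plain integer constant the preprocessor understands */
#if UINT64_C(1) << 40 > 0
int wide_shifts_available = 1;
#endif

/* Does not compile: casts are not valid in preprocessor arithmetic */
/* #if (uint64_t)1 << 40 > 0 */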
    

In general, I would recommend using an explicit cast. Or better yet, wrap all of it inside a named constant:

#define MY_BIT ( (uint64_t)1u << 60 )
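
Typical usage of such a named constant (the flags parameter is illustrative):

#include <stdint.h>

#define MY_BIT ( (uint64_t)1u << 60 )

void example(uint64_t *flags)
{
    *flags |= MY_BIT;        /* set bit 60 */
    if (*flags & MY_BIT) {   /* test it */
        *flags &= ~MY_BIT;   /* clear it again */
    }
}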