What is the difference between intXX_t and int_fastXX_t?
In the C99 standard, §7.18.1.3 Fastest minimum-width integer types:
(7.18.1.3p1) "Each of the following types designates an integer type that is usually fastest²²⁵) to operate with among all integer types that have at least the specified width."
Footnote 225: "The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements."
and
(7.18.1.3p2) "The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N."
The types int_fastN_t and uint_fastN_t are counterparts to the exact-width integer types intN_t and uintN_t. The implementation only guarantees that they are at least N bits wide; it is free to pick a wider type if operating on that wider type is faster.
For example, on a 32-bit machine, uint_fast16_t could be defined as unsigned int rather than as unsigned short, because working with types of the machine word size is more efficient.
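A quick way to see which types the implementation actually picked is to print their widths. This is only a minimal sketch; the figures in the comments are typical for glibc on x86 targets and are not guaranteed by the standard:

```c
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* sizeof * CHAR_BIT gives the width in bits of each typedef */
    printf("int16_t      : %zu bits\n", sizeof(int16_t) * CHAR_BIT);
    printf("int_least16_t: %zu bits\n", sizeof(int_least16_t) * CHAR_BIT);
    printf("int_fast16_t : %zu bits\n", sizeof(int_fast16_t) * CHAR_BIT);
    /* Typically 16/16/32 on 32-bit x86 with glibc, and 16/16/64 on
       x86-64 with glibc; other platforms may differ. */
    return 0;
}
```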
Another reason for their existence is that the exact-width integer types are optional in C, whereas the fastest minimum-width integer types and the minimum-width integer types (int_leastN_t and uint_leastN_t) are required.
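Because the exact-width macros such as INT16_MAX are only defined when the corresponding type exists, portable code can fall back to the minimum-width type. A sketch (sample_t is just an illustrative name, not anything from the standard):

```c
#include <stdint.h>

#ifdef INT16_MAX
typedef int16_t sample_t;        /* exactly 16 bits, when the type is provided */
#else
typedef int_least16_t sample_t;  /* smallest type with at least 16 bits, always provided */
#endif
```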
GNU libc defines {int,uint}_fast{16,32}_t as 64-bit when compiling for 64-bit CPUs and as 32-bit otherwise, on the grounds that operations on 64-bit integers are faster on Intel and AMD 64-bit x86 CPUs than the same operations on 32-bit integers.
There will probably not be a difference except on exotic hardware where int32_t and int16_t don't even exist. In that case you might use int_least16_t to get the smallest type that can hold 16 bits, which could be important if you want to conserve space.
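For instance, a sketch of using int_least16_t for bulk storage; the buffer size is arbitrary and just for illustration:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Values are assumed to fit in 16 bits; int_least16_t keeps the
       per-element storage as small as the platform allows. */
    static int_least16_t samples[100000];
    printf("buffer size: %zu bytes\n", sizeof samples);
    return 0;
}
```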
On the other hand, using int_fast16_t might get you another type, larger than int_least16_t but possibly faster for "typical" integer use. The implementation has to decide what is faster and what is typical; perhaps this is obvious for some special-purpose hardware.
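A sketch of the opposite trade-off, using the fast variants where speed matters more than size, e.g. for a loop counter and accumulator (the loop bound here is arbitrary):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int_fast32_t sum = 0;                       /* may be wider than 32 bits if that is faster */
    for (int_fast16_t i = 0; i < 1000; ++i)     /* counter only needs 16 bits of range */
        sum += i;
    printf("sum = %" PRIdFAST32 "\n", sum);     /* PRIdFAST32 matches int_fast32_t */
    return 0;
}
```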
On most common machines these 16-bit types will all be a typedef for short, and you don't have to bother.