What is uint_fast32_t and why should it be used instead of the regular int and uint32_t?
So the reason for typedef'd primitive data types is to abstract the low-level representation and make it easier to comprehend (uint64_t instead of a long long type, which is 8 bytes).

However, there is uint_fast32_t, which has the same typedef as uint32_t. Will using the "fast" version make the program faster?
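For context, the asker is presumably looking at something like the following in their platform's headers. This is a hypothetical excerpt of one implementation's stdint.h, not a requirement of the standard:

```c
/* Hypothetical excerpt from one platform's <stdint.h>.
 * On this particular implementation the "fast" type happens
 * to be the same type as the exact-width one; other
 * implementations are free to choose differently. */
typedef unsigned int uint32_t;
typedef uint32_t     uint_fast32_t;
```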
Solution 1:
- int may be as small as 16 bits on some platforms. It may not be sufficient for your application.
- uint32_t is not guaranteed to exist. It's an optional typedef that the implementation must provide iff it has an unsigned integer type of exactly 32 bits. Some implementations have 9-bit bytes, for example, so they don't have a uint32_t.
- uint_fast32_t states your intent clearly: it's a type of at least 32 bits which is the best from a performance point of view. uint_fast32_t may in fact be 64 bits long; it's up to the implementation, as the sketch below shows.
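A minimal sketch of what this means in practice. The printed sizes are whatever your implementation chose, so the output varies from platform to platform:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Exact-width type: exists only if the platform has a
     * 32-bit unsigned type with no padding bits. */
    printf("sizeof(uint32_t)      = %zu\n", sizeof(uint32_t));

    /* "Fast" type: at least 32 bits, chosen by the implementation
     * for speed. On many 64-bit Linux systems this prints 8. */
    printf("sizeof(uint_fast32_t) = %zu\n", sizeof(uint_fast32_t));

    printf("sizeof(int)           = %zu\n", sizeof(int));
    return 0;
}
```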
... there is uint_fast32_t which has the same typedef as uint32_t ...

What you are looking at is not the standard. It's a particular implementation (BlackBerry). So you can't deduce from there that uint_fast32_t is always the same as uint32_t.
See also:
- Exotic architectures the standards committees care about.
- My opinion-based pragmatic view of integer types in C and C++.
Solution 2:
The difference lies in their exactness and availability.

The doc here says:

unsigned integer type with width of exactly 8, 16, 32 and 64 bits respectively (provided only if the implementation directly supports the type):
uint8_t uint16_t uint32_t uint64_t

And:

fastest unsigned integer type with width of at least 8, 16, 32 and 64 bits respectively:
uint_fast8_t uint_fast16_t uint_fast32_t uint_fast64_t
So the difference is pretty clear: uint32_t is a type which has exactly 32 bits, and an implementation should provide it only if it has a type with exactly 32 bits, which it can then typedef as uint32_t. This means uint32_t may or may not be available.

On the other hand, uint_fast32_t is a type which has at least 32 bits, which also means that an implementation may typedef uint_fast32_t to the same type as uint32_t if it provides uint32_t. If it doesn't provide uint32_t, then uint_fast32_t could be a typedef of any type which has at least 32 bits.
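This availability difference can be tested at compile time. A sketch, relying on the standard guarantee that stdint.h defines UINT32_MAX exactly when uint32_t exists, while UINT_FAST32_MAX is always defined:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
#if defined(UINT32_MAX)
    /* uint32_t exists on this implementation. */
    printf("uint32_t is available, max = %lu\n",
           (unsigned long)UINT32_MAX);
#else
    printf("uint32_t is not available here\n");
#endif

    /* uint_fast32_t is mandatory, so this always compiles.
     * Cast to uintmax_t because the fast type may be wider
     * than unsigned long on some platforms. */
    printf("uint_fast32_t max = %ju\n", (uintmax_t)UINT_FAST32_MAX);
    return 0;
}
```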
Solution 3:
When you #include <inttypes.h> in your program, you get access to a bunch of different ways of representing integers.

The uint_fast*_t types simply define the fastest type for representing a given number of bits.

Think about it this way: you define a variable of type short and use it several times in the program, which is totally valid. However, the system you're working on might work more quickly with values of type int. By defining a variable as a uint_fast*_t type, the computer simply chooses the most efficient representation that it can work with.

If there is no difference between these representations, then the system chooses whichever one it wants, and uses it consistently throughout.
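Since inttypes.h also supplies matching printf format macros, a uint_fast32_t can be printed portably no matter which width the implementation picked. A small sketch (the loop bound is arbitrary, chosen just for illustration):

```c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* At least 32 bits; the implementation picks whichever
     * width it considers fastest (often the native word size). */
    uint_fast32_t sum = 0;

    for (uint_fast32_t i = 1; i <= 100000; i++)
        sum += i;

    /* PRIuFAST32 expands to the right conversion specifier
     * regardless of the actual width of uint_fast32_t. */
    printf("sum = %" PRIuFAST32 "\n", sum);
    return 0;
}
```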