max value of integer
In C, the integer (for a 32-bit machine) is 32 bits, and it ranges from -32,768 to +32,767. In Java, the integer (long) is also 32 bits, but ranges from -2,147,483,648 to +2,147,483,647.
I do not understand how the range is different in Java, even though the number of bits is the same. Can someone explain this?
Solution 1:
In C, the language itself does not determine the representation of certain data types. It can vary from machine to machine; on embedded systems an int can be 16 bits wide, though usually it is 32 bits. The only requirement is that short int <= int <= long int by size. There is also a recommendation that int should represent the native capacity of the processor.
All types are signed. The unsigned modifier allows you to use the highest bit as part of the value (otherwise it is reserved for the sign bit).
Here is a short table of the possible ranges for each width (a sketch after the table shows how to query your own implementation):

width            minimum                       maximum
signed    8 bit  -128                          +127
signed   16 bit  -32 768                       +32 767
signed   32 bit  -2 147 483 648                +2 147 483 647
signed   64 bit  -9 223 372 036 854 775 808    +9 223 372 036 854 775 807
unsigned  8 bit  0                             +255
unsigned 16 bit  0                             +65 535
unsigned 32 bit  0                             +4 294 967 295
unsigned 64 bit  0                             +18 446 744 073 709 551 615
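If you want to see what your particular implementation uses, a minimal sketch along these lines (relying only on the standard limits.h macros in a hosted C99 environment) prints the width and range of each type:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* CHAR_BIT is the number of bits per byte (almost always 8). */
        printf("short: %zu bits, %d .. %d\n",
               sizeof(short) * CHAR_BIT, SHRT_MIN, SHRT_MAX);
        printf("int:   %zu bits, %d .. %d\n",
               sizeof(int) * CHAR_BIT, INT_MIN, INT_MAX);
        printf("long:  %zu bits, %ld .. %ld\n",
               sizeof(long) * CHAR_BIT, LONG_MIN, LONG_MAX);
        printf("unsigned int: 0 .. %u\n", UINT_MAX);
        return 0;
    }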
In Java, the Java Language Specification determines the representation of the data types.
The order is: byte is 8 bits, short is 16 bits, int is 32 bits, long is 64 bits. All of these types are signed; there are no unsigned versions. However, bit manipulations treat the numbers as if they were unsigned (that is, handling all bits correctly).
The character data type char is 16 bits wide, unsigned, and holds characters using UTF-16 encoding (however, it is possible to assign a char an arbitrary unsigned 16-bit integer that represents an invalid character code point).
width            minimum                       maximum
SIGNED
byte:    8 bit   -128                          +127
short:  16 bit   -32 768                       +32 767
int:    32 bit   -2 147 483 648                +2 147 483 647
long:   64 bit   -9 223 372 036 854 775 808    +9 223 372 036 854 775 807
UNSIGNED
char:   16 bit   0                             +65 535
Solution 2:
"In C, the integer (for 32 bit machine) is 32 bit and it ranges from -32768 to +32767."
Wrong. A 32-bit signed integer in 2's complement representation has the range -2^31 to 2^31 - 1, which is equal to -2,147,483,648 to 2,147,483,647.
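To see where those numbers come from: one of the 32 bits holds the sign, so max = 2^31 - 1 and min = -2^31. A small sketch of the arithmetic, assuming the exact-width types from stdint.h are available:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        /* n-bit two's complement: max = 2^(n-1) - 1, min = -2^(n-1) */
        uint32_t two_pow_31 = (uint32_t)1 << 31;    /* 2^31 = 2 147 483 648 */
        int32_t  max = (int32_t)(two_pow_31 - 1);   /*  2 147 483 647 */
        int32_t  min = -max - 1;                    /* -2 147 483 648 */
        printf("%" PRId32 " .. %" PRId32 "\n", min, max);
        return 0;
    }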
Solution 3:
A 32-bit integer ranges from -2,147,483,648 to 2,147,483,647. However, the fact that you are on a 32-bit machine does not mean your C compiler uses 32-bit integers.
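The machine word size and the size of int are independent compiler/ABI choices. A minimal sketch, using the pointer size as a rough proxy for the machine word:

    #include <stdio.h>

    int main(void)
    {
        /* On a typical 64-bit platform this prints 8 and 4: the machine
           word is wider than int, yet int is still 32 bits. */
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        printf("sizeof(int)    = %zu\n", sizeof(int));
        return 0;
    }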
Solution 4:
The C language definition specifies minimum ranges for various data types. For int, this minimum range is -32767 to 32767, meaning an int must be at least 16 bits wide. An implementation is free to provide a wider int type with a correspondingly wider range: for example, on the SLES 10 development server I work on, the range is -2147483647 to 2147483647.
There are still some systems out there that use 16-bit int types (All The World Is Not A VAX x86), but there are plenty that use 32-bit int types, and maybe a few that use 64-bit.
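Given that, portable code should verify the assumption at compile time instead of trusting the phrase "32-bit machine". A minimal sketch using only the standard limits.h macros:

    #include <limits.h>
    #include <stdio.h>

    /* The standard only guarantees INT_MAX >= 32767; check the wider
       assumption explicitly before relying on it. */
    #if INT_MAX < 2147483647
    #error "this code assumes int is at least 32 bits wide"
    #endif

    int main(void)
    {
        printf("int range here: %d .. %d\n", INT_MIN, INT_MAX);
        return 0;
    }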
The C language was designed to run on different architectures. Java was designed to run in a virtual machine that hides those architectural differences.
Solution 5:
The strict equivalent of the Java int is long int in C.
Edit:
If int32_t is defined, then it is the equivalent in terms of precision. long int guarantees the precision of the Java int, because it is guaranteed to be at least 32 bits in size.
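A small sketch of that mapping, assuming a hosted C99 environment (int32_t is an optional exact-width type, while int_least32_t must always exist):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t       a = INT32_MIN;        /* exactly 32 bits: same range as Java's int */
        int_least32_t b = 2147483647;       /* at least 32 bits, always defined */
        long int      c = -2147483647L - 1; /* long is also guaranteed to hold -2^31 */
        printf("%" PRId32 " %" PRIdLEAST32 " %ld\n", a, b, c);
        return 0;
    }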