Integer arithmetic in Java with char and integer literal
It is because the compiler can check that 'a' + 10 is within the bounds of a char, whereas it cannot (in general) check that 'a' + <an integer> is within those bounds.
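A minimal sketch of the difference (the variable names are just for illustration):

    char ok = 'a' + 10;             // compiles: constant expression, value 107 fits in char
    int i = 10;
    // char no = 'a' + i;           // does not compile: 'a' + i is a non-constant int
    char yes = (char) ('a' + i);    // the non-constant case needs an explicit cast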
'a' + 10 is a compile-time constant expression with the value 'k', which can initialise a variable of type char. This is the same as being able to assign an integer literal in [-128, 127] to a byte variable. A value in [128, 255] is more annoying to put into a byte, because byte is signed, so such a value needs an explicit cast and wraps to a negative number.
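For comparison, the byte analogy (illustrative values only):

    byte b1 = 100;            // compiles: the constant 100 fits in byte's range [-128, 127]
    // byte b2 = 200;         // does not compile: 200 is outside byte's range
    byte b3 = (byte) 200;     // compiles with a cast, but the value wraps around to -56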
char is actually an unsigned 16-bit integer with a range of 0-65535. So you can assign any integer literal in that range to a char, e.g., "char c = 97", which results in "c" holding the character 'a'. You can print out the result using System.out.println(c).
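A minimal, complete version of that (class and variable names are arbitrary):

    public class CharDemo {
        public static void main(String[] args) {
            char c = 97;              // 97 is the code point of 'a'
            System.out.println(c);    // prints: a
        }
    }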
For the constant expression on the right-hand side of "char c = 'a' + 10", 'a' is promoted to int first because of the Java numeric promotion rules, giving the integer value 97. After adding 10 to it, we get the constant 107 (the code point of 'k'), which can be assigned to a char.
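You can see both halves of that directly (a small sketch):

    System.out.println('a' + 10);     // the addition happens in int: prints 107
    char c = 'a' + 10;                // the constant 107 is narrowed back to char
    System.out.println(c);            // prints: k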
The right-hand side of "char c = 'a' + i" is not a constant expression, so the assignment conversion rules require an explicit cast from int to char, i.e., "char c = (char) ('a' + i)".
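A sketch of the non-constant case; note that marking the variable final changes the picture, because a final variable initialised with a constant is itself a constant expression:

    int i = 10;
    // char c1 = 'a' + i;           // does not compile: not a constant expression
    char c2 = (char) ('a' + i);     // compiles: explicit narrowing cast
    final int j = 10;
    char c3 = 'a' + j;              // compiles: 'a' + j is a constant expression (value 107)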
The constant is treated specially (the spec does say that 10 is an int and that the addition is carried out in int, but the compiler handles constant expressions differently). In char c = 'a' + 10, the right-hand side is a compile-time constant whose value (107) fits in a char, so the compiler is allowed to narrow it to char implicitly. Therefore char c = 'a' + 10 works without a cast.
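A sketch of where that special treatment stops (the numbers are just examples):

    char ok = 'a' + 10;               // constant, 107 fits in char: compiles
    // char no = 'a' + 70000;         // constant, but 70097 does not fit in char: rejected
    char big = (char) ('a' + 70000);  // compiles with a cast, but the value wraps around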
In

    int i = 10;
    char c = 'a' + i;

you are adding a char to an int. An int can hold much larger values than a char, so the compiler picks the wider type (int) for the result, i.e. 'a' + i is evaluated as int + int. The result of the addition is therefore an int, and the compiler cannot guarantee that it fits into the char c.
Casting i to a char (e.g.: char c = 'a' + (char)i;) does not actually help, because char + char is still promoted to int; you have to cast the whole sum (e.g.: char c = (char) ('a' + i);). Alternatively you can go the other way and make the result an int (e.g.: int c = 'a' + i;), where no cast is needed at all.
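Putting the working variants together in one small, self-contained sketch:

    public class CharArithmetic {
        public static void main(String[] args) {
            int i = 10;
            // char c0 = 'a' + i;          // does not compile: non-constant int result
            char c1 = (char) ('a' + i);    // cast the whole sum back down to char
            int c2 = 'a' + i;              // or simply keep the result as an int
            System.out.println(c1);        // prints: k
            System.out.println(c2);        // prints: 107
        }
    }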