Why does int i = 1024 * 1024 * 1024 * 1024 compile without error?

There's nothing wrong with that statement: you're just multiplying four int values and assigning the result to an int; it simply happens to overflow. This is different from assigning a single literal, which is bounds-checked at compile time.

It is the out-of-bounds literal that causes the error, not the assignment:

System.out.println(2147483648);        // error
System.out.println(2147483647 + 1);    // no error

By contrast a long literal would compile fine:

System.out.println(2147483648L);       // no error

Note that, in fact, the result is still computed at compile time, because 1024 * 1024 * 1024 * 1024 is a constant expression:

int i = 1024 * 1024 * 1024 * 1024;

becomes:

   0: iconst_0      
   1: istore_1      

Notice that the result (0) is simply loaded and stored, and no multiplication takes place: 1024 is 2^10, so the product is 2^40, which wraps to 0 modulo 2^32.
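You can verify the folded value at runtime with a minimal, self-contained check (the class name here is arbitrary):

public class OverflowDemo {
    public static void main(String[] args) {
        int i = 1024 * 1024 * 1024 * 1024; // constant expression, folded to 0 at compile time
        System.out.println(i);             // prints 0
    }
}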


From JLS §3.10.1 (thanks to @ChrisK for bringing it up in the comments):

It is a compile-time error if a decimal literal of type int is larger than 2147483648 (2^31), or if the decimal literal 2147483648 appears anywhere other than as the operand of the unary minus operator (§15.15.4).


1024 * 1024 * 1024 * 1024 and 2147483648 do not have the same value in Java.

Actually, 2147483648 ISN'T EVEN A VALUE (although 2147483648L is) in Java. The compiler literally does not know what it is, or how to use it. So it whines.

1024 is a valid int in Java, and a valid int multiplied by another valid int is always a valid int, even if it's not the value you would intuitively expect, because the calculation overflows.

Example

Consider the following code sample:

public static void main(String[] args) {
    int a = 1024;
    int b = a * a * a * a; // overflows at run time; b ends up as 0
}

Would you expect this to generate a compile error? It becomes a little more slippery now: what if we use a loop with three iterations and multiply inside the loop, as sketched below?
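Here is one way to write that loop (a sketch; the variable names are illustrative):

int b = 1024;
for (int n = 0; n < 3; n++) {
    b *= 1024;         // overflows on the last iteration
}
System.out.println(b); // prints 0, the same value the constant expression folds to

Since b is no longer a constant expression, the compiler couldn't flag this even in principle without changing how it treats ordinary arithmetic.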

The compiler is allowed to optimize, but it can't change the behaviour of the program while it's doing so.


Some info on how this case is actually handled:

In Java and many other languages, integers consist of a fixed number of bits. Calculations that don't fit in the given number of bits overflow; in Java the calculation is effectively performed modulo 2^32, after which the value is converted back into a signed integer.
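For example, the wrap-around is easy to observe right at the boundary:

System.out.println(Integer.MAX_VALUE + 1); // prints -2147483648 (wraps past the maximum)
System.out.println(Integer.MIN_VALUE - 1); // prints 2147483647 (wraps past the minimum)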

Other languages or APIs use a dynamic number of bits (BigInteger in Java), raise an exception, or set the value to a magic value such as not-a-number.
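For comparison, here is a minimal BigInteger sketch that computes the mathematically exact product:

import java.math.BigInteger;

BigInteger exact = BigInteger.valueOf(1024).pow(4);
System.out.println(exact); // prints 1099511627776 (2^40), no overflow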


I have no idea why the second variant produces no error.

The behaviour that you suggest -- that is, the production of a diagnostic message when a computation produces a value that is larger than the largest value that can be stored in an integer -- is a feature. For you to use any feature, the feature must be thought of, considered to be a good idea, designed, specified, implemented, tested, documented and shipped to users.

For Java, one or more of the things on that list did not happen, and therefore you don't have the feature. I don't know which one; you'd have to ask a Java designer.

For C#, all of those things did happen -- about fourteen years ago now -- and so the corresponding program in C# has produced an error since C# 1.0.


In addition to arshajii's answer, I want to show one more thing:

It is not the assignment that causes the error but simply the use of the literal. When you try

long i = 2147483648;

you'll notice that it also causes a compile error, since the right-hand side is still an int literal and out of range.
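The fix is to make the literal itself a long with the L suffix:

long i = 2147483648L; // compiles: the literal now has type long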

So operations on int values (and that includes assignments) may overflow without a compile error (and without a runtime error as well), but the compiler simply rejects literals that are too large.


A: Because it is not an error.

Background: The multiplication 1024 * 1024 * 1024 * 1024 will lead to an overflow. An overflow is very often a bug. Different programming languages produce different behavior when overflows happen. For example, C and C++ call it "undefined behavior" for signed integers, while the behavior is defined for unsigned integers (take the mathematical result, add UINT_MAX + 1 as long as the result is negative, subtract UINT_MAX + 1 as long as the result is greater than UINT_MAX).

In the case of Java, if the result of an operation with int values is not in the allowed range, conceptually Java adds or subtracts 2^32 until the result is in the allowed range. So the statement is completely legal and not in error. It just doesn't produce the result that you may have hoped for.
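Worked through for the expression in question (a sketch of the conceptual reduction, not how the JVM literally computes it):

long exact = 1L << 40;             // 1024 * 1024 * 1024 * 1024 = 2^40 = 1099511627776
long wrapped = exact % (1L << 32); // reduce modulo 2^32
System.out.println(wrapped);       // prints 0, which is exactly the int result above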

You can surely argue whether this behavior is helpful, and whether the compiler should give you a warning. Personally, I'd say that a warning would be very useful, but an error would be incorrect, since it is legal Java.