Int division: Why is the result of 1/3 == 0?

I was writing this code:

public static void main(String[] args) {
    double g = 1 / 3;
    System.out.printf("%.2f", g);
}

The result is 0. Why is this, and how do I solve this problem?


Solution 1:

The two operands (1 and 3) are integers, therefore integer arithmetic (division here) is used. Declaring the result variable as double just causes an implicit conversion to occur after division.

Integer division returns the true quotient rounded towards zero (i.e., truncated). The mathematical result of 0.333... is therefore truncated to 0 here. (Note that the processor doesn't actually compute a fractional result and then round it, but truncation is a useful way to think about it.)

Also, note that if both operands are given as floating-point literals (3.0 and 1.0, which are doubles in Java), or even just the first one, then floating-point arithmetic is used, giving you 0.333....
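A quick sketch of both behaviors (the class name `DivisionDemo` is just for this example). Note that rounding towards zero also matters for negative operands, where it differs from rounding down:

```java
public class DivisionDemo {
    public static void main(String[] args) {
        // Both operands are ints: integer division runs first,
        // then the truncated result 0 is widened to 0.0.
        double g = 1 / 3;
        System.out.printf("%.2f%n", g);        // prints 0.00

        // One double operand is enough to get floating-point division.
        System.out.printf("%.2f%n", 1.0 / 3);  // prints 0.33

        // "Towards zero" is not the same as "down" for negatives:
        System.out.println(-7 / 2);            // prints -3, not -4
    }
}
```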

Solution 2:

1/3 uses integer division as both sides are integers.

You need at least one of them to be float or double.

If you are entering the values in the source code, as in your question, you can write 1.0/3; the literal 1.0 is a double.

If you get the values from elsewhere, you can use a cast, (double), to convert the int to a double.

int x = ...;
int y = ...;
double value = ((double) x) / y;
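For instance, with runtime values (the numbers 7 and 2 and the class name `CastDemo` are just illustrative):

```java
public class CastDemo {
    public static void main(String[] args) {
        int x = 7;
        int y = 2;
        // Without a cast, integer division happens first: 7 / 2 == 3.
        System.out.println((double) (x / y));   // prints 3.0
        // Casting x first makes the whole division floating-point.
        System.out.println(((double) x) / y);   // prints 3.5
    }
}
```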

Solution 3:

Make the operands doubles by using double literals (no cast needed):

double g = 1.0 / 3.0;

This happens because Java uses integer division for 1 and 3, since you entered them as integer constants.

Solution 4:

Because you are doing integer division.

As @Noldorin says, if both operands are integers, then integer division is used.

The result 0.333... can't be represented as an integer, therefore only the integer part (0) is assigned to the result.

If either operand is a double/float, then floating-point arithmetic will take place. But you'll have the same problem if you assign the result back to an int; in Java this won't even compile without an explicit cast, and the cast truncates:

int n = (int) (1.0 / 3.0); // n == 0

Solution 5:

The easiest solution is to just do this

double g = (double) 1 / 3;

Since you didn't write 1.0 / 3.0, the (double) cast explicitly converts the first operand to a double before the division runs. This is what is called a cast operator. Casting just one operand is enough to avoid integer division (rounding towards zero), because the other operand is then promoted to double as well.
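One thing to watch: the cast binds more tightly than the division, so it applies only to the 1. Casting the whole expression instead would be too late, as sketched here (the class name `CastPrecedence` is just for illustration):

```java
public class CastPrecedence {
    public static void main(String[] args) {
        double g1 = (double) 1 / 3;   // cast 1 first: 1.0 / 3 -> 0.333...
        double g2 = (double) (1 / 3); // integer division first: (double) 0 -> 0.0
        System.out.printf("%.2f %.2f%n", g1, g2); // prints 0.33 0.00
    }
}
```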