Why does this random value have a 25/75 distribution instead of 50/50?

Edit: So basically what I'm trying to write is a 1-bit hash for double.

I want to map a double to true or false with a 50/50 chance. For that I wrote code that picks some random numbers (just as an example; I want to use this on data with regularities and still get a 50/50 result), checks their last bit, and increments y if it is 1 or n if it is 0.

However, this code consistently results in 25% y and 75% n. Why is it not 50/50? And why such a weird, yet straightforward, 1:3 distribution?

import java.util.Random;

import org.junit.Test;

public class DoubleToBoolean {
    @Test
    public void test() {
        int y = 0;
        int n = 0;
        Random r = new Random();
        for (int i = 0; i < 1000000; i++) {
            double randomValue = r.nextDouble();
            // Look at the least significant bit of the double's bit pattern.
            long lastBit = Double.doubleToLongBits(randomValue) & 1;
            if (lastBit == 1) {
                y++;
            } else {
                n++;
            }
        }
        System.out.println(y + " " + n);
    }
}

Example output:

250167 749833

Because nextDouble works like this: (source)

public double nextDouble()
{
    return (((long) next(26) << 27) + next(27)) / (double) (1L << 53);
}

next(x) generates x random bits.

Now why does this matter? Because about half the numbers generated by the first part (before the division) are less than 1L << 52, and therefore their significand doesn't entirely fill the 53 bits that it could fill; converting them to a double shifts the significand left to normalize it, so the least significant bit of those doubles is always zero.
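To see this concretely, here is a minimal sketch (my own illustration, feeding two hand-picked 53-bit values through the same construction) showing that any value below 1L << 52 yields a double whose low bit is zero:

public class LowBitDemo {
    public static void main(String[] args) {
        long below = 175L;              // fewer than 53 significant bits
        long above = (1L << 52) | 175L; // exactly 53 significant bits
        double dBelow = below / (double) (1L << 53);
        double dAbove = above / (double) (1L << 53);
        // The small value's significand is shifted left during conversion,
        // so its low bit is 0.
        System.out.println(Double.doubleToLongBits(dBelow) & 1); // prints 0
        // The large value's low 52 bits are copied as-is,
        // so the low bit of 175 survives.
        System.out.println(Double.doubleToLongBits(dAbove) & 1); // prints 1
    }
}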


Because of the amount of attention this is receiving, here's some extra explanation of what a double in Java (and many other languages) really looks like and why it mattered in this question.

Basically, a double looks like this: (source)

[figure: IEEE 754 double layout: 1 sign bit, 11 exponent bits, 52 fraction bits]

A very important detail not visible in this picture is that numbers are "normalized"[1] such that the 53-bit fraction starts with a 1 (by choosing the exponent so that it does); that leading 1 is then omitted. That is why the picture shows 52 bits for the fraction (significand), but it effectively contains 53 bits.

Normalization means that if, in the code for nextDouble, the 53rd bit of the generated number is set, then that bit is the implicit leading 1 and it goes away, while the other 52 bits are copied literally into the significand of the resulting double. If that bit is not set, however, the remaining bits must be shifted left until it becomes set.

On average, half the generated numbers fall into the case where the significand is not shifted at all (and about half of those have a 0 as their least significant bit), while the other half is shifted by at least one place (or is completely zero), so their least significant bit is always 0. Overall, the low bit is zero with probability 1/2 · 1/2 + 1/2 · 1 = 3/4, which is exactly the observed 25/75 split.
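As a sanity check, here is a small counting sketch (my own; it simulates next(26) and next(27) with nextInt, which is an assumption about equivalent output, not Random's actual internals) that tallies the two cases separately:

import java.util.Random;

public class ShiftCaseCount {
    public static void main(String[] args) {
        Random r = new Random();
        int unshifted = 0, shifted = 0, shiftedLowBitOne = 0;
        for (int i = 0; i < 1000000; i++) {
            // Build the 53-bit value the same way nextDouble does.
            long m = ((long) r.nextInt(1 << 26) << 27) + r.nextInt(1 << 27);
            if (m >= (1L << 52)) {
                unshifted++; // 53rd bit set: the other 52 bits are copied literally
            } else {
                shifted++;   // 53rd bit clear: normalization shifts the bits left
                double d = m / (double) (1L << 53);
                if ((Double.doubleToLongBits(d) & 1) == 1) shiftedLowBitOne++;
            }
        }
        // Expect roughly 500000 500000 0: the shifted half never gets a 1 in the low bit.
        System.out.println(unshifted + " " + shifted + " " + shiftedLowBitOne);
    }
}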

[1]: not always; clearly it cannot be done for zero, which has no highest 1. These numbers are called denormal or subnormal numbers, see wikipedia: denormal number.
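For reference, a tiny sketch (mine, not part of the original explanation) showing where the implicit leading 1 stops applying:

public class SubnormalDemo {
    public static void main(String[] args) {
        // Smallest double that still has the implicit leading 1:
        System.out.println(Double.MIN_NORMAL); // 2.2250738585072014E-308
        // Smallest positive double of all; subnormal, with no implicit 1:
        System.out.println(Double.MIN_VALUE);  // 4.9E-324
    }
}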


From the docs:

The method nextDouble is implemented by class Random as if by:

public double nextDouble() {
  return (((long)next(26) << 27) + next(27))
      / (double)(1L << 53);
}

But it also states the following (emphasis mine):

[In early versions of Java, the result was incorrectly calculated as:

 return (((long)next(27) << 27) + next(27))
     / (double)(1L << 54);

This might seem to be equivalent, if not better, but in fact it introduced a large nonuniformity because of the bias in the rounding of floating-point numbers: it was three times as likely that the low-order bit of the significand would be 0 than that it would be 1! This nonuniformity probably doesn't matter much in practice, but we strive for perfection.]

This note has been there since Java 5 at least (docs for Java <= 1.4 are behind a login wall, too lazy to check). This is interesting, because the problem apparently still exists even in Java 8. Perhaps the "fixed" version was never tested?
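Out of curiosity, here is a sketch (my own; it simulates next(27) with nextInt, which is an assumption about equivalent output) that reproduces the old, incorrect calculation so the rounding bias the docs describe can be observed directly:

import java.util.Random;

public class OldNextDoubleDemo {
    public static void main(String[] args) {
        Random r = new Random();
        int y = 0, n = 0;
        for (int i = 0; i < 1000000; i++) {
            // The pre-fix formula: 54 random bits squeezed into a 53-bit
            // significand by rounding.
            double d = (((long) r.nextInt(1 << 27) << 27) + r.nextInt(1 << 27))
                    / (double) (1L << 54);
            if ((Double.doubleToLongBits(d) & 1) == 1) y++; else n++;
        }
        System.out.println(y + " " + n); // also roughly 1:3, as the quoted note says
    }
}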


This result doesn't surprise me given how floating-point numbers are represented. Let's suppose we had a very short floating-point type with only 4 bits of precision. If we were to generate a random number between 0 and 1, distributed uniformly, there would be 16 possible values:

0.0000
0.0001
0.0010
0.0011
0.0100
...
0.1110
0.1111

If that's how they looked in the machine, you could test the low-order bit to get a 50/50 distribution. However, IEEE floats are represented as a power of 2 times a mantissa; one field in the float is the power of 2 (plus a fixed offset). The power of 2 is selected so that the "mantissa" part is always a number >= 1.0 and < 2.0. This means that, in effect, the numbers other than 0.0000 would be represented like this:

0.0001 = 2^(-4) x 1.000
0.0010 = 2^(-3) x 1.000
0.0011 = 2^(-3) x 1.100
0.0100 = 2^(-2) x 1.000
... 
0.0111 = 2^(-2) x 1.110
0.1000 = 2^(-1) x 1.000
0.1001 = 2^(-1) x 1.001
...
0.1110 = 2^(-1) x 1.110
0.1111 = 2^(-1) x 1.111

(The 1 before the binary point is an implied value; for 32- and 64-bit floats, no bit is actually allocated to hold this 1.)

But looking at the above should demonstrate why, if you convert the representation to bits and look at the low bit, you will get zero 75% of the time: all values less than 0.5 (binary 0.1000), which is half of the possible values, have their mantissas shifted over, putting a 0 in the low bit. The situation is essentially the same when the mantissa has 52 bits (not counting the implied 1), as a double's does.
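To reproduce the table above, here is a short sketch (my own illustration; real doubles represent these values exactly) that prints each k/16 in the same power-of-2-times-mantissa form, with the mantissa shown in decimal rather than binary:

public class NormalizedForm {
    public static void main(String[] args) {
        for (int k = 1; k < 16; k++) {
            double d = k / 16.0;
            int exp = Math.getExponent(d);          // the unbiased power of 2
            double mantissa = d / Math.pow(2, exp); // always >= 1.0 and < 2.0
            System.out.printf("%2d/16 = 2^(%d) x %s%n", k, exp, mantissa);
        }
    }
}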

(Actually, as @sneftel suggested in a comment, we could include more than 16 possible values in the distribution, by generating:

0.0001000 with probability 1/128
0.0001001 with probability 1/128
...
0.0001111 with probability 1/128
0.001000  with probability 1/64
0.001001  with probability 1/64
...
0.01111   with probability 1/32 
0.1000    with probability 1/16
0.1001    with probability 1/16
...
0.1110    with probability 1/16
0.1111    with probability 1/16

But I'm not sure it's the kind of distribution most programmers would expect, so it probably isn't worthwhile. Plus it doesn't gain you much when the values are used to generate integers, as random floating-point values often are.)
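For completeness, one way such a distribution could be generated for real doubles is the following hypothetical sketch (my construction, not something proposed in the answer): choose the exponent geometrically, halving the probability at each step down, then fill all 52 mantissa bits uniformly, so the low bit of the significand comes out 50/50:

import java.util.Random;

public class DenseUniformDouble {
    public static void main(String[] args) {
        Random r = new Random();
        // Land in the binade [2^exp, 2^(exp+1)) with probability 2^exp.
        int exp = -1;
        while (r.nextBoolean() && exp > -1022) {
            exp--;
        }
        long mantissa = r.nextLong() & ((1L << 52) - 1);    // 52 uniform bits
        long bits = ((long) (exp + 1023) << 52) | mantissa; // assemble IEEE 754 fields
        double d = Double.longBitsToDouble(bits);
        System.out.println(d); // essentially uniform in (0, 1), all significand bits random
    }
}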