Declaring floats, why default type double?

I am curious as to why float literals must be declared as so:

float f = 0.1f;

Instead of

float f = 0.1;

Why is the default type double? Why can't the compiler infer that the literal is a float by looking at the left side of the assignment? Google only turns up explanations of what the default types are, not why they are so.


Why is the default type a double?

That's a question that would be best asked of the designers of the Java language. They are the only people who know the real reasons why that language design decision was made. But I expect that the reasoning was something along the following lines:

They needed to distinguish between the two kinds of literals because they really do denote different values ... from a mathematical perspective.

Suppose they had made "float" the default for literals, and consider this example:

// (Hypothetical "java" code ... )
double d = 0.1;
double d2 = 0.1d;

In the above, d and d2 would end up with different values. In the first case, a low-precision float value is converted to a higher-precision double value at the point of assignment, but you cannot recover precision that was never there.
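You can see that precision gap in real Java today, since a float literal is already legal on the right-hand side of a double assignment (the class and variable names here are mine):

public class PrecisionDemo {
    public static void main(String[] args) {
        double fromFloat = 0.1f;  // float literal, widened to double on assignment
        double fromDouble = 0.1;  // double literal

        System.out.println(fromFloat);               // 0.10000000149011612
        System.out.println(fromDouble);              // 0.1
        System.out.println(fromFloat == fromDouble); // false
    }
}

The float nearest to 0.1 and the double nearest to 0.1 are simply different numbers, and widening the former cannot recreate the missing bits.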

I posit that a language design where those two statements are both legal yet mean different things is a BAD idea ... considering that the actual meaning of the first statement differs from its "natural" meaning.

By doing it the way they've done it:

double d = 0.1f;
double d2 = 0.1;

are both legal, and again mean different things. But in the first statement the programmer's intention is clear, and in the second the "natural" meaning is what the programmer gets. And in this case:

float f = 0.1f;
float f2 = 0.1;    // compilation error!

... the compiler picks up the mismatch.
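If the programmer really does want to store that value in a float, the intent has to be spelled out; a minimal sketch:

float f2 = (float) 0.1;  // explicit narrowing cast: the precision loss is now visible
float f3 = 0.1f;         // or simply write a float literal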


I am guessing that with modern hardware, using floats is the exception rather than the rule (doubles being used instead), so at some point it would make sense to assume that the user intends 0.1f when he writes float f = 0.1;

They could do that already. But the problem is coming up with a set of type conversion rules that work ... and that are simple enough that you don't need a degree in Java-ology to understand them. Having 0.1 mean different things in different contexts would be confusing. And consider this:

void method(float f) { ... }
void method(double d) { ... }

// Which overload is called in the following?
this.method(1.0);
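To make the ambiguity concrete, here is a runnable sketch of that overload pair (the class wrapper and print statements are my own):

public class OverloadDemo {
    static void method(float f)  { System.out.println("float overload"); }
    static void method(double d) { System.out.println("double overload"); }

    public static void main(String[] args) {
        method(1.0);   // prints "double overload": 1.0 is a double literal
        method(1.0f);  // prints "float overload"
    }
}

If 0.1 meant different things in different contexts, the reader could no longer tell from the call site which overload a literal argument selects.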

Programming language design is tricky. A change in one area can have consequences in others.


UPDATE to address some points raised by @supercat.

@supercat: Given the above overloads, which method will be invoked for method(16777217)? Is that the best choice?

In my comments I incorrectly said this would be a compilation error. In fact, the answer is method(float).

The JLS says this:

15.12.2.5. Choosing the Most Specific Method

If more than one member method is both accessible and applicable to a method invocation, it is necessary to choose one to provide the descriptor for the run-time method dispatch. The Java programming language uses the rule that the most specific method is chosen.

...

[The symbols m1 and m2 denote methods that are applicable.]

[If] m2 is not generic, and m1 and m2 are applicable by strict or loose invocation, and where m1 has formal parameter types S1, ..., Sn and m2 has formal parameter types T1, ..., Tn, the type Si is more specific than Ti for argument ei for all i (1 ≤ i ≤ n, n = k).

...

The above conditions are the only circumstances under which one method may be more specific than another.

A type S is more specific than a type T for any expression if S <: T (§4.10).

In this case, we are comparing method(float) and method(double), both of which are applicable to the call. Since float <: double, float is the more specific type, and therefore method(float) will be selected.
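That choice is exactly what makes @supercat's example pointed: 16777217 is 2^24 + 1, the first positive integer that a float cannot represent exactly, so the call compiles and silently rounds. A quick check (the harness is mine):

public class MostSpecificDemo {
    static void method(float f)  { System.out.println("float: " + f); }
    static void method(double d) { System.out.println("double: " + d); }

    public static void main(String[] args) {
        method(16777217);  // prints "float: 1.6777216E7" -- the int widened to
                           // float and lost its low-order bit
    }
}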

@supercat: Such behavior may cause problems if e.g. an expression like int2 = (int) Math.round(int1 * 3.5) or long2 = Math.round(long1 * 3.5) gets replaced with int2 = (int) Math.round(int1 * 3) or long2 = Math.round(long1 * 3).

The change would look harmless, but the first two expressions are correct up to 613566756 or 2573485501354568, while the latter two fail above 5592405 [the last being completely bogus above 715827882].

If you are talking about a person making that change ... well yes.

However, the compiler won't make that change behind your back. For example, int1 * 3.5 has type double (the int is converted to a double before the multiplication), so you end up calling Math.round(double).

As a general rule, Java arithmetic will implicitly convert from "smaller" to "larger" numeric types, but not from "larger" to "smaller".
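A sketch of those rules in action, including the one surprise relevant here: a long argument to Math.round selects the more specific Math.round(float) overload, not Math.round(double) (the variable names are mine):

long long1 = 5592407;                     // 3 * long1 = 16777221, which is > 2^24 and odd

long viaDouble = Math.round(long1 * 3.5); // long widens to double (exact here); returns long
int viaFloat   = Math.round(long1 * 3);   // long widens to FLOAT: 16777221 rounds to
                                          // 16777220.0f, so this returns 16777220 -- off by one

This is the mechanism behind the 5592405 threshold quoted above.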

However, you do still need to be careful since (in your rounding example):

  • the product of an integer and a floating-point value may not be representable with sufficient precision, because (say) a float has fewer bits in its significand (24) than an int has value bits (31);

  • Math.round(double) returns a long, clamping out-of-range values to Long.MIN_VALUE / Long.MAX_VALUE, and a further cast to a smaller integer type silently discards the high-order bits (see the sketch below).
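Both gotchas, sketched with values I picked for illustration:

float f = 16777217;                       // int -> float widening: silently becomes 1.6777216E7
long clamped = Math.round(1.0e300);       // out of long range: clamps to Long.MAX_VALUE
int truncated = (int) Math.round(3.0e10); // the long 30000000000 wraps to -64771072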

But all of this illustrates that arithmetic support in a programming language is tricky, and there are inevitable gotchas for new or unwary programmers.


Ha, this is just the tip of the iceberg, my friend.

Programmers coming from other languages certainly don't mind having to add a little F to a literal compared to:

SomeReallyLongClassName x = new SomeReallyLongClassName();

Pretty redundant, right?

It's true that you'd have to talk to the core Java designers themselves to get more background. But as a pure surface-level explanation, one important concept to understand is what an expression is. In Java (I'm no expert, so take this with a grain of salt), your code is analyzed by the compiler in terms of expressions; so:

float f

has a type, and

0.1f

also has a type (float).

Generally speaking, if you're going to assign the value of one expression to another, the types must agree. There are a few very specific cases where this rule is relaxed (e.g., boxing a primitive like int into a reference type such as Integer), but in general it holds.
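Boxing, for example, is one of those relaxed cases; a tiny sketch:

Integer boxed = 42;   // int literal auto-boxed to Integer
int unboxed = boxed;  // auto-unboxed back to int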

It might seem silly in this case, but here's a very similar case where it doesn't seem so silly:

double getDouble() {
    return 0.1; // some logic to return a double
}

void example() {
    float f = getDouble(); // compilation error: possible lossy conversion from double to float
}

Now in this case, we can see that it makes sense for the compiler to flag a problem. The value returned by getDouble is a 64-bit double, whereas f can only hold a 32-bit float; so, without an explicit cast, it's possible the programmer has made a mistake.

These two scenarios are clearly different from a human point of view; but my point about expressions is that when code is first broken down into expressions and then analyzed, they are the same.

I'm sure the compiler authors could have written some not-so-clever logic to re-interpret literals based on the types of expressions they're assigned to; they simply didn't. Probably it wasn't considered worth the effort in comparison to other features.

For perspective, plenty of languages are able to do type inference; in C#, for example, you can do this:

var x = new SomeReallyLongClassName();

And the type of x will be inferred by the compiler based on that assignment.

For literals, though, C# behaves the same as Java: 0.1 is a double, and assigning it to a float requires an f suffix or an explicit cast.