Loss of precision - int -> float or double

I am revising for an exam, and this question is worth 4 marks.

"In java we can assign a int to a double or a float". Will this ever lose information and why?

I have put that because ints are normally of a fixed length or size, the precision for storing data is finite, whereas the values floating point tries to represent can be infinite; essentially, we lose information because of this.

Now I am a little sketchy as to whether or not I am hitting the right areas here. I am fairly sure it will lose precision, but I can't exactly put my finger on why. Can I get some help, please?


In Java an int uses 32 bits to represent its value.

In Java a float uses a 23-bit mantissa plus an implicit leading bit, giving 24 bits of precision, so integers greater than 2^24 may have their least significant bits rounded away. For example, 33554435 (0x2000003) is rounded to 33554436 when converted to a float.
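
You can verify the rounding with a quick test (a minimal sketch; the class name is only for illustration):

    public class FloatPrecision {
        public static void main(String[] args) {
            int original = 33554435;          // 2^25 + 3, i.e. 0x2000003
            float f = original;               // implicit widening conversion int -> float
            System.out.println(original);     // prints 33554435
            System.out.println((int) f);      // prints 33554436 - the low bits were rounded away
        }
    }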

In Java a double uses a 52-bit mantissa (53 bits of precision with the implicit leading bit), so it can represent any 32-bit integer without loss of data.
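
A round-trip check (again, just a sketch) confirms that no 32-bit int is damaged by a trip through double:

    public class DoubleExactness {
        public static void main(String[] args) {
            // A double has 53 bits of precision, so every 32-bit int converts exactly.
            int[] samples = { Integer.MIN_VALUE, -1, 0, 33554435, Integer.MAX_VALUE };
            for (int i : samples) {
                double d = i;                 // implicit widening conversion int -> double
                System.out.println(i + " -> " + d + " (exact: " + ((int) d == i) + ")");
            }
        }
    }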

See also "Floating Point" on Wikipedia.


It's not necessary to know the internal layout of floating-point numbers. All you need is the pigeonhole principle and the knowledge that int and float are the same size.

  • int is a 32-bit type, for which every bit pattern represents a distinct integer, so there are 2^32 int values.
  • float is a 32-bit type, so it has at most 2^32 distinct values.
  • Some floats represent non-integers, so there are fewer than 2^32 float values that represent integers.
  • Therefore, some distinct int values must be converted to the same float (= loss of precision).

Similar reasoning can be used with long and double.
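
To make the pigeonhole argument concrete, here is a small sketch; 2^24 and 2^24 + 1 are chosen because they are the first pair of adjacent ints that collapse to the same float:

    public class Pigeonhole {
        public static void main(String[] args) {
            int a = 16777216;                 // 2^24
            int b = 16777217;                 // 2^24 + 1, not representable as a float
            float fa = a;
            float fb = b;
            System.out.println(a == b);       // false: distinct ints
            System.out.println(fa == fb);     // true: same float, so information was lost
        }
    }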