Printing double without losing precision

How do you print a double to a stream so that when it is read in you don't lose precision?

I tried:

#include <iostream>
#include <iomanip>
#include <limits>
#include <sstream>

int main()
{
    std::stringstream ss;

    double v = 0.1 * 0.1;
    ss << std::setprecision(std::numeric_limits<double>::digits10) << v << " ";

    double u;
    ss >> u;
    std::cout << "precision " << ((u == v) ? "retained" : "lost") << std::endl;
}

This did not work as I expected.

But I can increase the precision (which surprised me, as I thought that digits10 was the maximum required):

ss << std::setprecision(std::numeric_limits<double>::digits10 + 2) << v << " ";
                                                     //    ^^^^^^^^^^^ + 2

It has to do with the number of significant digits: the leading zeros in 0.01 are not significant, so they do not count toward the precision.

So has anybody looked at representing floating point numbers exactly? What is the exact magical incantation I need to perform on the stream?

After some experimentation:

The trouble was with my original version: there were non-significant digits in the string after the decimal point, and they affected the accuracy.

To compensate for this, we can use scientific notation:

ss << std::scientific
   << std::setprecision(std::numeric_limits<double>::digits10 + 1)
   << v;

This still does not explain the need for the +1 though.

Also if I print out the number with more precision I get more precision printed out!

std::cout << std::scientific << std::setprecision(std::numeric_limits<double>::digits10) << v << "\n";
std::cout << std::scientific << std::setprecision(std::numeric_limits<double>::digits10 + 1) << v << "\n";
std::cout << std::scientific << std::setprecision(std::numeric_limits<double>::digits) << v << "\n";

It results in:

1.000000000000000e-02
1.0000000000000002e-02
1.00000000000000019428902930940239457413554200000000000e-02

Based on @Stephen Canon's answer below:

We can print the value out exactly by using the printf() format specifiers "%a" or "%A". To achieve the same thing in C++ we need to set both the fixed and scientific flags (see n3225: 22.4.2.2.2p5 Table 88):

std::cout.flags(std::ios_base::fixed | std::ios_base::scientific);
std::cout << v;

For now I have defined:

template<typename T>
std::ostream& precise(std::ostream& stream)
{
    // fixed | scientific together select the exact hexfloat output format.
    stream.flags(std::ios_base::fixed | std::ios_base::scientific);
    return stream;
}

std::ostream& preciselngd(std::ostream& stream){ return precise<long double>(stream);}
std::ostream& precisedbl(std::ostream& stream) { return precise<double>(stream);}
std::ostream& preciseflt(std::ostream& stream) { return precise<float>(stream);}

Next: How do we handle NaN/Inf?


It's not correct to say "floating point is inaccurate", although I admit that's a useful simplification. If we used base 8 or 16 in real life then people around here would be saying "base 10 decimal fraction packages are inaccurate, why did anyone ever cook those up?".

The problem is that integral values translate exactly from one base into another, but fractional values do not, because they represent fractions of the integral step and only a few of them are used.

Floating point arithmetic is technically perfectly accurate. Every calculation has one and only one possible result. The problem is that most decimal fractions have base-2 representations that repeat. In fact, in the sequence 0.01, 0.02, ... 0.99, only 3 values have exact binary representations (0.25, 0.50, and 0.75). The other 96 values repeat and therefore are obviously not represented exactly.

Now, there are a number of ways to write and read back floating point numbers without losing a single bit. The idea is to avoid trying to express the binary number with a base 10 fraction.

  • Write them as binary. These days, everyone implements the IEEE-754 format, so as long as you pick a byte order and both write and read in that byte order, the numbers will be portable.
  • Write them as 64-bit integer values. Here you can use the usual base 10. (Because you are representing the 64-bit aliased integer, not the 52-bit fraction; a sketch of this follows below.)

You can also just write more decimal fraction digits. Whether this is bit-for-bit accurate will depend on the quality of the conversion libraries and I'm not sure I would count on perfect accuracy (from the software) here. But any errors will be exceedingly small and your original data certainly has no information in the low bits. (None of the constants of physics and chemistry are known to 52 bits, nor has any distance on earth ever been measured to 52 bits of precision.) But for a backup or restore where bit-for-bit accuracy might be compared automatically, this obviously isn't ideal.


Don't print floating-point values in decimal if you don't want to lose precision. Even if you print enough digits to represent the number exactly, not all implementations have correctly-rounded conversions to/from decimal strings over the entire floating-point range, so you may still lose precision.

Use hexadecimal floating point instead. In C:

printf("%a\n", yourNumber);

C++0x provides the hexfloat manipulator for iostreams, which does the same thing (on some platforms, using the std::hex modifier has the same result, but this is not a portable assumption).

Using hex floating point is preferred for several reasons.

First, the printed value is always exact. No rounding occurs in writing or reading a value formatted in this way. Beyond the accuracy benefits, this means that reading and writing such values can be faster with a well tuned I/O library. They also require fewer digits to represent values exactly.


I got interested in this question because I'm trying to (de)serialize my data to & from JSON.

I think I have a clearer explanation (with less hand waving) for why 17 decimal digits are sufficient to reconstruct the original number losslessly:

[figure: three number lines showing the base-2 ticks, the rounded base-10 ticks, and the reconstructed base-2 ticks, with a failed reconstruction marked in red]

Imagine 3 number lines:
1. for the original base 2 number
2. for the rounded base 10 representation
3. for the reconstructed number (same as #1 because both in base 2)

When you convert to base 10, graphically, you choose the tick on the 2nd number line closest to the tick on the 1st. Likewise when you reconstruct the original from the rounded base 10 value.

The critical observation I had was that in order to allow exact reconstruction, the base 10 step size (quantum) has to be < the base 2 quantum. Otherwise, you inevitably get the bad reconstruction shown in red.

Take the specific case of when the exponent is 0 for the base2 representation. Then the base2 quantum will be 2^-52 ~= 2.22 * 10^-16. The closest base 10 quantum that's less than this is 10^-16. Now that we know the required base 10 quantum, how many digits will be needed to encode all possible values? Given that we're only considering the case of exponent = 0, the dynamic range of values we need to represent is [1.0, 2.0). Therefore, 17 digits would be required (16 digits for fraction and 1 digit for integer part).

For exponents other than 0, we can use the same logic:

    exponent   base2 quant.   base10 quant.   dynamic range          digits needed
    -------------------------------------------------------------------------------
        1         2^-51          10^-16       [2, 4)                      17
        2         2^-50          10^-16       [4, 8)                      17
        3         2^-49          10^-15       [8, 16)                     17
      ...
       32         2^-20          10^-7        [2^32, 2^33)                17
     1022        9.98e291       1.0e291       [4.49e307, 8.99e307)        17

While not exhaustive, the table shows the trend that 17 digits are sufficient.

Hope you like my explanation.