How deterministic is floating point inaccuracy?

I understand that floating point calculations have accuracy issues and there are plenty of questions explaining why. My question is: if I run the same calculation twice, can I always rely on it to produce the same result? What factors might affect this?

  • Time between calculations?
  • Current state of the CPU?
  • Different hardware?
  • Language / platform / OS?
  • Solar flares?

I have a simple physics simulation and would like to record sessions so that they can be replayed. If the calculations can be relied on, I should only need to record the initial state plus any user input, and I should always be able to reproduce the final state exactly. If the calculations cannot be relied on, small errors at the start may have huge implications by the end of the simulation.
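
For illustration, here is a minimal sketch (in C, with made-up State/Input types and a made-up step function, since the Silverlight code would not change the idea) of the record-and-replay scheme I have in mind: a fixed-timestep update that depends only on the previous state and the logged input for that step. If every run executes the identical sequence of floating point operations on identical inputs, the final states should match bit for bit.

    #include <stdio.h>

    /* Hypothetical types and names, for illustration only. */
    typedef struct { double x, v; } State;
    typedef struct { double force; } Input;

    /* One fixed-timestep update: depends only on (state, input). */
    static State step(State s, Input in) {
        const double dt = 1.0 / 60.0; /* same dt every run, never wall-clock time */
        s.v += in.force * dt;
        s.x += s.v * dt;
        return s;
    }

    int main(void) {
        Input recorded[3] = { {1.0}, {0.0}, {-1.0} }; /* the replay log         */
        State s = { 0.0, 0.0 };                       /* recorded initial state */

        for (int i = 0; i < 3; i++)
            s = step(s, recorded[i]);

        printf("final: x=%.17g v=%.17g\n", s.x, s.v);
        return 0;
    }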

I am currently working in Silverlight, though I would be interested to know if this question can be answered in general.

Update: The initial answers indicate yes, but apparently this isn't entirely clear-cut, as discussed in the comments on the selected answer. It looks like I will have to do some tests and see what happens.


From what I understand, you're only guaranteed identical results provided that you're dealing with the same instruction set and compiler, and that any processors you run on adhere strictly to the relevant standards (i.e. IEEE 754). That said, unless you're dealing with a particularly chaotic system, any drift in calculation between runs isn't likely to result in buggy behavior.

Specific gotchas that I'm aware of:

  1. Some operating systems allow you to set the mode of the floating point processor in ways that break compatibility; see the first sketch after this list for a demonstration.

  2. Floating point intermediate results often use 80-bit precision in registers, but only 64-bit in memory. If a program is recompiled in a way that changes register spilling within a function, it may return different results than other builds. Most platforms give you a way to force all results to be truncated to the in-memory precision (e.g. gcc's -ffloat-store or MSVC's /fp switches).

  3. Standard library functions may change between versions. I gather there are some fairly commonly encountered examples of this in gcc 3 vs 4.

  4. The IEEE standard itself allows some binary representations to differ, specifically NaN values, though I can't recall the details. The second sketch after this list shows two NaNs that compare bitwise-unequal.
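
To make gotcha 1 concrete, here is a minimal C sketch (my own, not from any particular OS's documentation) showing that flipping the rounding mode changes the result of the very same division. The volatile qualifiers defeat compile-time constant folding; on glibc you may need to link with -lm.

    #include <stdio.h>
    #include <fenv.h>

    /* ISO C wants this pragma before touching the FP environment; some
       compilers (notably gcc) ignore it but still behave as expected at
       low optimization levels. */
    #pragma STDC FENV_ACCESS ON

    int main(void) {
        volatile double one = 1.0, three = 3.0; /* force a runtime division */

        fesetround(FE_TONEAREST);               /* the default mode */
        printf("to nearest: %.17g\n", one / three);

        fesetround(FE_UPWARD);                  /* FE_UPWARD is only defined
                                                   where hardware supports it */
        printf("upward:     %.17g\n", one / three);

        fesetround(FE_TONEAREST);               /* restore the default */
        return 0;
    }

On x86 this prints two values for 1/3 that differ in the last digit, so a library or plugin that quietly changes the mode changes your results.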

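And for gotcha 4, a quick demonstration that two values which are both NaN need not share a bit pattern, so anything that compares raw bits (a replay checksum, say) can disagree even when the arithmetic agrees. Whether the two patterns below actually differ is itself platform-dependent, which is rather the point.

    #include <stdio.h>
    #include <string.h>
    #include <math.h>

    int main(void) {
        volatile double zero = 0.0;  /* volatile: make 0.0/0.0 happen at runtime */
        double a = nan("");          /* a quiet NaN from the library             */
        double b = zero / zero;      /* a NaN produced by an invalid operation   */

        printf("both NaN:      %d\n", isnan(a) && isnan(b));
        printf("bitwise equal: %d\n", memcmp(&a, &b, sizeof a) == 0);
        return 0;
    }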

The short answer is that FP calculations are entirely deterministic, as per the IEEE Floating Point Standard, but that doesn't mean they're entirely reproducible across machines, compilers, OSes, etc.

The long answer to these questions and more can be found in what is probably the best reference on floating point, David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic. Skip to the section on the IEEE standard for the key details.

To answer your bullet points briefly:

  • Time between calculations and state of the CPU have little to do with this.

  • Hardware can affect things (e.g. some GPUs are not IEEE floating point compliant).

  • Language, platform, and OS can also affect things. For a better description of this than I can offer, see Jason Watkins's answer. If you are using Java, take a look at Kahan's rant on Java's floating point inadequacies.

  • Solar flares might matter, hopefully infrequently. I wouldn't worry too much, because if they do matter, then everything else is screwed up too. I would put this in the same category as worrying about EMP.

Finally, if you perform the same sequence of floating point calculations on the same initial inputs, then the replay should be exact. However, the exact sequence can change depending on your compiler/OS/standard library, so you might get small errors that way.
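
As a concrete illustration of why the sequence matters, floating point addition is not associative, so a compiler or library that merely regroups a sum changes the answer (a sketch; any values of mismatched magnitude will do):

    #include <stdio.h>

    int main(void) {
        double a = 1e16, b = -1e16, c = 1.0;

        /* Same three numbers, same mathematical sum, different grouping. */
        printf("(a + b) + c = %.17g\n", (a + b) + c); /* prints 1 */
        printf("a + (b + c) = %.17g\n", a + (b + c)); /* prints 0 */
        return 0;
    }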

Where you usually run into problems in floating point is when you have a numerically unstable method and you start with FP inputs that are approximately the same but not quite. If your method is stable, you should be able to guarantee reproducibility within some tolerance. If you want more detail, take a look at Goldberg's FP article linked above or pick up an intro text on numerical analysis.
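
To give a feel for what instability means, here is a small sketch of catastrophic cancellation: subtracting nearly equal numbers amplifies a one-part-in-10^9 change in the input into roughly a one-part-in-10^2 change in the output.

    #include <stdio.h>

    int main(void) {
        double a  = 1.0000001;
        double b  = 1.0;
        double a2 = a * (1.0 + 1e-9); /* perturb a by one part in a billion */

        double d1 = a  - b;           /* ~1e-7: almost all digits cancel */
        double d2 = a2 - b;

        printf("relative change in input:  %g\n", (a2 - a) / a);   /* ~1e-9 */
        printf("relative change in output: %g\n", (d2 - d1) / d1); /* ~1e-2 */
        return 0;
    }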