How are denormalized floats handled in C#?
I just read a fascinating article about the 20x-200x slowdowns you can get on Intel CPUs with denormalized floats (floating-point numbers very close to 0).
SSE has an option to flush these to 0, restoring performance when such values are encountered.
How do C# apps handle this? Is there an option to enable/disable _MM_FLUSH_ZERO?
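For reference, a minimal timing sketch along these lines should reproduce the effect (the exact ratio depends on the CPU; 1e-310 is below the smallest normal double of roughly 2.2E-308, so it is denormal):

```csharp
using System;
using System.Diagnostics;

class DenormalDemo
{
    static void Main()
    {
        Console.WriteLine($"normal operand:   {Time(1e-300)} ms");
        Console.WriteLine($"denormal operand: {Time(1e-310)} ms"); // 1e-310 < 2.2E-308
    }

    // Runs identical arithmetic with a given seed; with a denormal seed every
    // iteration stays in the denormal range and hits the slow path on affected CPUs.
    static long Time(double seed)
    {
        double x = seed;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 50_000_000; i++)
            x = x * 0.5 + seed; // converges to 2 * seed, staying (de)normal throughout
        sw.Stop();
        GC.KeepAlive(x); // keep the result live so the loop isn't optimized away
        return sw.ElapsedMilliseconds;
    }
}
```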
Solution 1:
There is no such option.
The FPU control word in a C# app is initialized by the CLR at startup, and changing it is not an option the framework provides. Even if you change it by pinvoking __control87_2(), the change will not last long: any exception causes the control word to be reset by the exception handling implementation inside the CLR, which was written to deal with another aspect of the control word, the flags that unmask floating point exceptions. Changing it is also detrimental to any other managed code that does not expect global state to be altered like that.
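For completeness, this is roughly what such an attempt looks like. A sketch only, assuming the Microsoft CRT (msvcrt.dll) is loadable and using _controlfp(), a close cousin of __control87_2(); the constants are the _MCW_DN/_DN_FLUSH values from float.h. As explained above, the CLR can reset this state behind your back at any time:

```csharp
using System;
using System.Runtime.InteropServices;

class FlushToZeroAttempt
{
    // Denormal-control constants from the MSVC float.h header.
    const uint _MCW_DN   = 0x03000000; // denormal control mask
    const uint _DN_FLUSH = 0x01000000; // flush denormals to zero

    // _controlfp() reads/writes the floating point control state
    // (on x64 this includes the SSE MXCSR register).
    [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern uint _controlfp(uint newControl, uint mask);

    static void Main()
    {
        uint cw = _controlfp(_DN_FLUSH, _MCW_DN);
        Console.WriteLine($"control word: 0x{cw:X8}");
        // Denormals are now flushed to zero... until the CLR's exception
        // handling restores the control word, as described above.
    }
}
```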
Having no direct control over the hardware is an implied restriction when you run code in a virtual machine. Not that this is easy to do in native code either: libraries tend to misbehave when they, too, expect the FPU to have the default initialization. The exception masking flags are a particular problem; DLLs created with Borland tools have a knack for turning exceptions on, making other code fail that isn't written to deal with such an exception. An extremely ugly problem to solve; the FPU control word is the worst possible global variable you can imagine.
This does put the burden on you not to let your floating point calculations go haywire like this. Calculating with denormals almost always produces nonsense results, if not from the radically small values then at least from the rapid loss of significant digits. Truncating values with a magnitude below 2.2E-308 (the smallest normal double) to 0 is up to you. Yes, not very practical. Perhaps it is okay for a program to deliver nonsense results a bit slower than normal :)
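If you go the manual route, a helper along these lines is about all managed code can offer (a sketch; note that double.Epsilon is the smallest *denormal*, not the smallest normal, which is why the constant is spelled out):

```csharp
using System;

static class Denormals
{
    // Smallest normal double; anything with a smaller magnitude is denormal.
    const double MinNormal = 2.2250738585072014E-308;

    // Manual flush-to-zero, applied per value by the program itself.
    public static double FlushToZero(double x) =>
        Math.Abs(x) < MinNormal ? 0.0 : x;
}
```

You would call it on the hot values in an inner loop, e.g. `x = Denormals.FlushToZero(x * decay);`, paying one comparison per value instead of the microcoded denormal penalty.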