The right way to compare a System.Double to '0' (a number, int?)

Sorry, this might be an easy, stupid question, but I need to know to be sure.

I have this if expression,

void Foo()
{
    System.Double something = GetSomething();
    if (something == 0) // Comparison of floating point numbers with equality
                        // operator. Possible loss of precision while rounding value
        {}
}

Is that expression equivalent to

void Foo()
{
    System.Double something = GetSomething();
    if (something < 1)
        {}
}

? If so, I might have a problem: the if would also be entered with e.g. a value of 0.9.
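The two conditions are not equivalent, which a minimal sketch (with 0.9 chosen as an illustrative value) makes clear:

```csharp
using System;

class Program
{
    static void Main()
    {
        double something = 0.9;

        // The two conditions disagree for 0.9:
        Console.WriteLine(something == 0); // False
        Console.WriteLine(something < 1);  // True
    }
}
```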


Well, how close do you need the value to be to 0? If you go through a lot of floating-point operations that would result in 0 under "infinite precision" arithmetic, you can end up with a result "very close" to 0.

Typically in this situation you want to provide some sort of epsilon, and check that the result is just within that epsilon:

if (Math.Abs(something) < 0.001)

The epsilon you should use is application-specific - it depends on what you're doing.
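As a sketch of why the epsilon check is needed (0.001 here is an arbitrary illustrative tolerance, not a recommendation):

```csharp
using System;

class Program
{
    static void Main()
    {
        // A chain of floating-point operations that "should" give 0
        // typically leaves a tiny rounding residue instead.
        double something = 0.1 + 0.2 - 0.3;

        Console.WriteLine(something == 0);              // False
        Console.WriteLine(Math.Abs(something) < 0.001); // True
    }
}
```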

Of course, if the result should be exactly zero, then a simple equality check is fine.


If something has been assigned the result of an operation other than something = 0, then you had better use:

if(Math.Abs(something) < Double.Epsilon)
{
    // do something
}

Edit: This code is wrong. Double.Epsilon is the smallest positive double greater than zero, not a sensible tolerance. When you wish to compare a number to another number, you need to think about what tolerance is acceptable. Say you don't care about anything beyond .00001; then that's the number you'd use. The value depends on the domain. However, it's almost certainly never Double.Epsilon.
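To illustrate the edit's point: Double.Epsilon (about 4.9e-324) is far smaller than typical rounding error, so a tolerance check against it fails where a domain-appropriate tolerance (1e-9 here, chosen arbitrarily for illustration) succeeds:

```csharp
using System;

class Program
{
    static void Main()
    {
        double something = 0.1 + 0.2 - 0.3; // ~5.6e-17, not exactly 0

        // The rounding residue is many orders of magnitude larger
        // than Double.Epsilon, so this check does NOT treat it as zero:
        Console.WriteLine(Math.Abs(something) < Double.Epsilon); // False

        // A domain-appropriate tolerance does:
        Console.WriteLine(Math.Abs(something) < 1e-9); // True
    }
}
```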


Your something is a double, and you have correctly identified that in the line

if (something == 0)

we have a double on the left-hand side (lhs) and an int on the right-hand side (rhs).

But now it seems like you think the lhs will be converted to an int, and the == sign will then compare two integers. That's not what happens. The conversion from double to int is explicit and cannot happen "automatically".

Instead, the opposite happens. The rhs is converted to double, and then the == sign becomes an equality test between two doubles. This conversion is implicit (automatic).

It is considered better (by some) to write

if (something == 0.0)

or

if (something == 0d)

because then it's immediately clear that you're comparing two doubles. However, that's just a matter of style and readability; the compiler does the same thing in either case.
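A small sketch of the conversion direction described above: the int literal is implicitly widened to double, while the reverse direction requires an explicit cast:

```csharp
using System;

class Program
{
    static void Main()
    {
        double something = 0.0;

        // All three compile to the same comparison of two doubles:
        // the int literal 0 is implicitly converted to 0.0.
        Console.WriteLine(something == 0);   // True
        Console.WriteLine(something == 0.0); // True
        Console.WriteLine(something == 0d);  // True

        // The opposite direction is explicit:
        // int i = something;   // compile-time error (no implicit conversion)
        int i = (int)something; // explicit cast is required
        Console.WriteLine(i);   // 0
    }
}
```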

It's also relevant, in some cases, to introduce a "tolerance" as in Jon Skeet's answer, but that tolerance would be a double too. It could of course be 1.0 if you wanted, but it does not have to be [the least strictly positive] integer.