Why does >= return false when == returns true for null values?

I have two variables of type int? (or Nullable<int> if you will). I wanted to do a greater-than-or-equal (>=) comparison on the two variables, but as it turns out, this returns false when both variables are null, while the == operator returns true.

Can someone explain to me why that is logical? The semantic definition of the >= operator contains the word "or", so I would expect it to return true whenever == does.
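
For concreteness, a minimal sketch of the behaviour in question (the variable names are illustrative):

using System;

int? a = null;
int? b = null;

Console.WriteLine(a == b); // prints True:  two nulls compare equal under ==
Console.WriteLine(a >= b); // prints False: >= yields false when an operand is null
Console.WriteLine(a <= b); // prints False: likewise for <=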


There was a huge debate about this oddity when the feature was originally designed back in C# 2.0. The problem is that C# users are completely used to this being meaningful:

if (someReference == null)

When extending equality to nullable value types, you have the following choices.

  1. Nullable equality is truly lifted. If one or both of the operands is null, then the result is neither true nor false, but null. In this case you can either:

    • a) Make it illegal to use a nullable value type equality comparison in an if statement, because the if statement needs a bool, not a nullable bool. Instead, require everyone to use HasValue when they want to compare to null. This is verbose and irritating.

    • b) Automatically convert null to false. The downside of this is that x == null returns false if x is null, which is confusing and works against people's understanding of null comparisons with reference types.

  2. Nullable equality is not lifted. Nullable equality is either true or false, and comparison to null is a null check. This makes nullable equality inconsistent with the lifted relational operators (<, >, <=, >=).

None of these choices is obviously correct; they all have pros and cons. VBScript chooses 1b, for example. After much debate, the C# design team chose #2.
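
A short sketch of how choice #2 plays out under the semantics C# ended up with (my own illustration, not the design team's example):

using System;

int? x = null;
int? y = null;

Console.WriteLine(x == y);     // True:  equality is a null check, so two nulls are equal
Console.WriteLine(x != y);     // False: inequality follows the same non-lifted rule
Console.WriteLine(x >= y);     // False: >= remains lifted, and a lifted comparison involving null yields false
Console.WriteLine(x.HasValue); // False: the explicit test that option 1a would have required everywhere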


Because equality is defined separately from comparability.
You can test x == null, but x > null is meaningless; in C# it always evaluates to false. The same goes for every relational operator (<, <=, >, >=) when either operand is null.
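
If you do want an ordering in which two nulls count as equal (with null sorting before any value), the framework's Nullable.Compare provides one; a quick sketch:

using System;

int? x = null;
int? y = null;

Console.WriteLine(x > null);               // False: relational operators yield false when an operand is null
Console.WriteLine(x >= y);                 // False, even though x == y is True
Console.WriteLine(Nullable.Compare(x, y)); // 0: Compare treats two nulls as equal

Nullable.Compare uses Comparer<T>.Default, which is the same ordering sorting applies, so null sorts before every non-null value.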