Why don't 'ref' and 'out' support polymorphism?
Take the following:
class A {}
class B : A {}
class C
{
C()
{
var b = new B();
Foo(b);
Foo2(ref b); // <= compile-time error:
// "The 'ref' argument doesn't match the parameter type"
}
void Foo(A a) {}
void Foo2(ref A a) {}
}
Why does the above compile-time error occur? It happens with both ref and out arguments.
=============
UPDATE: I used this answer as the basis for this blog entry:
Why do ref and out parameters not allow type variation?
See the blog page for more commentary on this issue. Thanks for the great question.
=============
Let's suppose you have classes Animal, Mammal, Reptile, Giraffe, Turtle and Tiger, with the obvious subclassing relationships.
Now suppose you have a method void M(ref Mammal m). M can both read and write m.
Can you pass a variable of type Animal to M?
No. That variable could contain a Turtle, but M will assume that it contains only Mammals. A Turtle is not a Mammal.
Conclusion 1: ref parameters cannot be made "bigger". (There are more animals than mammals, so the variable is getting "bigger" because it can contain more things.)
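To see conclusion 1 concretely, here is a minimal sketch (the Sketch1 wrapper class is hypothetical; the animal classes and M mirror the ones in this answer):
class Animal {}
class Mammal : Animal {}
class Reptile : Animal {}
class Giraffe : Mammal {}
class Tiger : Mammal {}
class Turtle : Reptile {}

class Sketch1
{
    static void M(ref Mammal m)
    {
        Mammal local = m; // M assumes the variable behind m only ever holds Mammals
    }

    static void Example()
    {
        Animal animal = new Turtle();
        // M(ref animal); // rejected: if this compiled, M would read a Turtle as a Mammal
    }
}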
Can you pass a variable of type Giraffe to M?
No. M can write to m, and M might want to write a Tiger into m. Now you've put a Tiger into a variable which is actually of type Giraffe.
Conclusion 2: ref parameters cannot be made "smaller".
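And a sketch of conclusion 2, reusing the animal hierarchy from the sketch above (Sketch2 is again a hypothetical wrapper):
class Sketch2
{
    static void M(ref Mammal m)
    {
        m = new Tiger(); // legal inside M: a Tiger is a Mammal
    }

    static void Example()
    {
        Giraffe giraffe = new Giraffe();
        // M(ref giraffe); // rejected: if this compiled, giraffe would end up holding a Tiger
    }
}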
Now consider N(out Mammal n).
Can you pass a variable of type Giraffe to N?
No. N can write to n, and N might want to write a Tiger.
Conclusion 3: out parameters cannot be made "smaller".
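Sketched the same way (hypothetical wrapper, hierarchy as above):
class Sketch3
{
    static void N(out Mammal n)
    {
        n = new Tiger(); // N must assign n before returning, and any Mammal will do
    }

    static void Example()
    {
        Giraffe giraffe;
        // N(out giraffe); // rejected: if this compiled, giraffe would end up holding a Tiger
    }
}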
Can you pass a variable of type Animal to N?
Hmm.
Well, why not? N cannot read from n, it can only write to it, right? You write a Tiger to a variable of type Animal and you're all set, right?
Wrong. The rule is not "N can only write to n".
The rules are, briefly:
1) N has to write to n before N returns normally. (If N throws, all bets are off.)
2) N has to write something to n before it reads something from n.
That permits this sequence of events:
- Declare a field x of type Animal.
- Pass x as an out parameter to N.
- N writes a Tiger into n, which is an alias for x.
- On another thread, someone writes a Turtle into x.
- N attempts to read the contents of n, and discovers a Turtle in what it thinks is a variable of type Mammal.
Clearly we want to make that illegal.
Conclusion 4: out parameters cannot be made "larger".
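Here is that scenario as a sketch (hypothetical Sketch4 wrapper; x, N, and the animal classes are the ones described above):
class Sketch4
{
    static Animal x = new Turtle(); // a field of type Animal, visible to other threads

    static void N(out Mammal n)
    {
        n = new Tiger(); // rule 1: write before returning normally
        Mammal m = n;    // rule 2: reading is allowed once something has been written
        // If n were an alias for x, another thread could overwrite x with a Turtle
        // between the two lines above, and m would then hold a Turtle.
    }

    static void Example()
    {
        // N(out x); // rejected: if this compiled, the race described above becomes possible
    }
}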
Final conclusion: Neither ref nor out parameters may vary their types. To do otherwise is to break verifiable type safety.
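If you hit this error in practice, the usual workaround is to pass a temporary whose type exactly matches the parameter and copy the result back; a sketch in terms of the question's types:
class A {}
class B : A {}

class C
{
    void Foo2(ref A a) {}

    void Caller()
    {
        var b = new B();
        A tmp = b;       // a variable whose compile-time type exactly matches the parameter
        Foo2(ref tmp);   // fine: no variance is needed
        b = tmp as B;    // copy back; yields null if Foo2 stored an A that isn't a B
    }
}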
If these issues in basic type theory interest you, consider reading my series on how covariance and contravariance work in C# 4.0.
Because in both cases you must be able to assign a value to the ref/out parameter.
If you try to pass b into the Foo2 method by reference, and inside Foo2 you try to assign a = new A(), this would be invalid.
Same reason you can't write:
B b = new A();
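Spelled out with the question's types (the body of Foo2 below is hypothetical, just to show the invalid state):
class A {}
class B : A {}

class C
{
    void Foo2(ref A a)
    {
        a = new A(); // perfectly legal inside Foo2: an A is an A
    }

    void Caller()
    {
        var b = new B();
        // Foo2(ref b); // rejected: if this compiled, b (declared as B) would now hold a plain A
    }
}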
You're struggling with the classic OOP problem of covariance (and contravariance); see Wikipedia. Much as this may defy intuitive expectations, it is mathematically impossible to allow substitution of derived classes for base ones in mutable (assignable) arguments (and likewise in containers whose items are assignable, for exactly the same reason) while still respecting Liskov's principle. Why that is so is sketched in the existing answers and explored more deeply in those wiki articles and the links therefrom.
OOP languages that appear to allow it while remaining traditionally statically typesafe are "cheating" (inserting hidden dynamic type checks, or requiring compile-time examination of ALL sources to check). The fundamental choice is: either give up on this kind of covariance and accept practitioners' puzzlement (as C# does here), or move to a dynamic typing approach (as the very first OOP language, Smalltalk, did), or move to immutable (single-assignment) data, as functional languages do. Under immutability you can support covariance and also avoid related puzzles, such as the fact that you cannot have Square subclass Rectangle in a mutable-data world.