Why is typecase a bad thing?
Both Agda and Idris effectively prohibit pattern matching on values of type Type. It seems that Agda always matches on the first case, while Idris just throws an error.
So, why is typecase a bad thing? Does it break consistency? I haven't been able to find much information regarding the topic.
It's really odd that people think pattern matching on types is bad. We get a lot of mileage out of pattern matching on data which encode types, whenever we do a universe construction. If you take the approach that Thorsten Altenkirch and I pioneered (and which my comrades and I began to engineer), the types do form a closed universe, so you don't even need to solve the (frankly worth solving) problem of computing with open datatypes to treat types as data. If we could pattern match on types directly, we wouldn't need a decoding function to map type codes to their meanings, which at worst reduces clutter, and at best reduces the need to prove and coerce by equational laws about the behaviour of the decoding function. I have every intention of building a no-middleman closed type theory this way. Of course, you need that level 0 types inhabit a level 1 datatype. That happens as a matter of course when you build an inductive-recursive universe hierarchy.
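To make the universe construction concrete, here is a minimal sketch in Agda, assuming the standard library; the names U, El, and default are invented for the example. Pattern matching on codes in U, decoded by El, is exactly the "typecase" the answer describes:

```agda
-- A tiny closed universe: a datatype of codes for types.
open import Data.Nat     using (ℕ; zero)
open import Data.Bool    using (Bool; true)
open import Data.Product using (_×_; _,_)

data U : Set where
  nat  : U
  bool : U
  pair : U → U → U

-- The decoding function ("middleman") mapping codes to their meanings.
El : U → Set
El nat        = ℕ
El bool       = Bool
El (pair s t) = El s × El t

-- "Typecase" is ordinary pattern matching on codes: a generic
-- default element for every type in the universe.
default : (t : U) → El t
default nat        = zero
default bool       = true
default (pair s t) = default s , default t
```

In a no-middleman theory of the kind proposed, types themselves would play the role of U and the detour through El would disappear.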
But what about parametricity, I hear you ask?
Firstly, I don't want parametricity when I'm trying to write type-generic code. Don't force parametricity on me.
Secondly, why should types be the only things we're parametric in? Why shouldn't we sometimes be parametric in other stuff, e.g., perfectly ordinary type indices which inhabit datatypes but which we'd prefer not to have at run time? It's a real nuisance that quantities which play a part only in specification are, just because of their type, forced to be present.
The type of a domain has nothing whatsoever to do with whether quantification over it should be parametric.
Let's have (e.g. as proposed by Bernardy and friends) a discipline where both parametric/erasable and non-parametric/matchable quantification are distinct and both available. Then types can be data and we can still say what we mean.
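Such a discipline exists today in roughly this form: Agda's run-time irrelevance annotations (and, similarly, Idris 2's multiplicities) let the programmer mark a binding as erased/parametric while leaving unannotated bindings matchable. A hedged sketch, assuming a recent Agda with the --erasure flag and the standard library; the names idErased and vhead are invented:

```agda
{-# OPTIONS --erasure #-}

open import Data.Nat using (ℕ; suc)
open import Data.Vec using (Vec; _∷_)

-- '@0' marks a binding as erased: it is parametric, cannot be
-- matched on, and is absent at run time.
idErased : (@0 A : Set) → A → A
idErased A x = x

-- An index that plays a part only in the specification can be
-- erased too, even though it inhabits an ordinary datatype.
vhead : {@0 A : Set} {@0 n : ℕ} → Vec A (suc n) → A
vhead (x ∷ _) = x
```

Here erasure is chosen per quantifier, not dictated by the type of the domain, which is precisely the point above.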
Many people see matching on types as bad because it breaks parametricity for types.
In a language with parametricity for types, when you see a variable

    f : forall a . a -> a

you immediately know a lot about the possible values of f. Intuitively: since f is a function, it can be written

    f x = body

The body needs to be of type a, but a is unknown, so the only available value of type a is x. If the language allows nontermination, f could also loop. But can it choose between looping and returning x based on the value of x? No: because a is unknown, f doesn't know which functions to call on x in order to make the decision. So there are really just two options: f x = x and f x = f x. This is a powerful theorem about the behavior of f that we get just by looking at the type of f. Similar reasoning works for all types with universally quantified type variables.
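In a total language the looping option is absent, so the theorem is even stronger: the type has exactly one inhabitant. A sketch in Agda (where quantification over Set is parametric, since Set cannot be matched on):

```agda
-- The type alone pins down the implementation: the only
-- definition Agda will accept here is the identity.
id′ : (A : Set) → A → A
id′ A x = x
```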
Now if f could match on the type a, many more implementations of f would become possible, and we would lose the powerful theorem.
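To see the loss concretely, here is a hedged Agda sketch using a universe of codes as a stand-in for matchable types (the names U, El, and notId are invented). A function with the shape of the identity can inspect the code and behave differently at each type, so the free theorem no longer holds:

```agda
open import Data.Nat  using (ℕ; suc)
open import Data.Bool using (Bool; not)

data U : Set where
  nat  : U
  bool : U

El : U → Set
El nat  = ℕ
El bool = Bool

-- Shaped like 'forall a . a -> a', but quantifying over codes we
-- can match on: it is not the identity, so the theorem that every
-- inhabitant is id (or a loop) fails.
notId : (a : U) → El a → El a
notId nat  n = suc n
notId bool b = not b
```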