Why does Haskell's `head` crash on an empty list (or why *doesn't* it return an empty list)? (Language philosophy)
Is polymorphism lost by allowing empty list handling? If so, how so, and why? Are there particular cases which would make this obvious?
The free theorem for `head` states that

    f . head = head . map f

Applying this theorem to `[]` implies that

    f (head []) = head (map f []) = head []

This theorem must hold for every `f`, so in particular it must hold for `const True` and `const False`. This implies

    True = const True (head []) = head [] = const False (head []) = False

Thus if `head` is properly polymorphic and `head []` were a total value, then `True` would equal `False`.
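As a sanity check of the equation itself (not of the contradiction, which is specific to `[]`), the free theorem can be exercised on non-empty lists. The following is a small sketch assuming the QuickCheck package; the property name and `main` are my own:

```haskell
import Test.QuickCheck

-- f . head = head . map f, exercised on arbitrary functions and
-- non-empty lists; for [] both sides are undefined, which is exactly
-- why no total head [] could satisfy the theorem for every f.
prop_freeTheoremHead :: Fun Int Bool -> NonEmptyList Int -> Bool
prop_freeTheoremHead f (NonEmpty xs) =
  applyFun f (head xs) == head (map (applyFun f) xs)

main :: IO ()
main = quickCheck prop_freeTheoremHead
```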
PS. I have some other comments about the background to your question: if you have a precondition that your list is non-empty, then you should enforce it by using a non-empty list type in your function's signature instead of a plain list.
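For instance, `Data.List.NonEmpty` from base lets the signature carry that precondition; here is a small sketch with function names of my own choosing:

```haskell
import Data.List.NonEmpty (NonEmpty)
import qualified Data.List.NonEmpty as NE

-- The type now states the precondition: at least one element,
-- so NE.head is total and no empty case can arise here.
firstAndRest :: NonEmpty Int -> (Int, [Int])
firstAndRest xs = (NE.head xs, NE.tail xs)

-- Converting from an ordinary list makes the caller deal with
-- emptiness exactly once, at the boundary.
fromPlainList :: [Int] -> Maybe (Int, [Int])
fromPlainList = fmap firstAndRest . NE.nonEmpty
```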
Why does anyone use `head :: [a] -> a` instead of pattern matching? One reason is that you know the argument cannot be empty and do not want to write code to handle the case where it is.
Of course, your `head'` of type `[a] -> Maybe a` is already defined in the standard library as `Data.Maybe.listToMaybe`. But if you replace a use of `head` with `listToMaybe`, you have to write the code to handle the empty case, which defeats this purpose of using `head`.
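The trade-off in code (the function names here are purely illustrative):

```haskell
import Data.Maybe (listToMaybe)

-- With head: short, with no empty-case code, but the non-emptiness
-- obligation is invisible and a violation becomes a runtime exception.
firstWord :: String -> String
firstWord = head . words

-- With listToMaybe: total, but every caller must now decide what
-- happens when there is no word at all.
firstWordSafe :: String -> Maybe String
firstWordSafe = listToMaybe . words
```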
I am not saying that using `head` is good style. It hides the fact that it can result in an exception, and in this sense it is not good. But it is sometimes convenient. The point is that `head` serves some purposes which cannot be served by `listToMaybe`.
The last quotation in the question (about polymorphism) simply means that it is impossible to define a function of type `[a] -> a` which returns a value on the empty list (as Russell O'Connor explained in his answer).
It's only natural to expect the following to hold: `xs === head xs : tail xs` - a list is identical to its first element, followed by the rest. Seems logical, right?

Now, let's count the number of conses (applications of `:`), disregarding the actual elements, when applying the purported 'law' to `[]`: `[]` should be identical to `foo : bar`, but the former has 0 conses, while the latter has (at least) one. Uh oh, something's not right here!
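A quick way to see the mismatch from GHCi or a tiny program (only the Prelude is needed; the name `lawDemo` is mine):

```haskell
-- The purported law, checked on both list shapes. For [] the two sides
-- already differ at the outermost constructor (zero conses vs. one),
-- so (==) returns False without ever forcing head []:
lawDemo :: IO ()
lawDemo = do
  print ([1, 2, 3] == head [1, 2, 3] : tail [1, 2, 3])  -- True
  print (([] :: [Int]) == head [] : tail [])            -- False
```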
Haskell's type system, for all its strengths, is not up to expressing the fact that you should only call `head` on a non-empty list (and that the 'law' is only valid for non-empty lists). Using `head` shifts the burden of proof to the programmer, who should make sure it's not used on empty lists. I believe dependently typed languages like Agda can help here.
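GHC's extensions can get part of the way there today. Below is a minimal length-indexed vector sketch (the names `Vec` and `vhead` are mine), in which applying `vhead` to an empty vector is rejected at compile time rather than at run time:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- A vector whose length is tracked in its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Only vectors of length 'S n (at least one element) are accepted,
-- so the single equation is exhaustive and "vhead VNil" does not type-check.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x
```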
Finally, a slightly more operational-philosophical description: how should `head ([] :: [a]) :: a` be implemented? Conjuring a value of type `a` out of thin air is impossible (think of uninhabited types such as `data Falsum`), and would amount to proving anything (via the Curry-Howard isomorphism).
There are a number of different ways to think about this. So I am going to argue both for and against `head'`:

Against `head'`:

There is no need to have `head'`: since lists are a concrete data type, everything that you can do with `head'` you can do by pattern matching.

Furthermore, with `head'` you're just trading off one functor for another. At some point you want to get down to brass tacks and get some work done on the underlying list element.
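For instance (a small sketch with an invented function name), a plain pattern match covers both cases directly:

```haskell
-- Everything head' would tell you, a pattern match tells you on the spot,
-- and the empty case cannot be overlooked:
describe :: [Int] -> String
describe []      = "empty list"
describe (x : _) = "starts with " ++ show x
```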
In defense of `head'`:

But pattern matching obscures what's going on. In Haskell we are interested in calculating functions, which is better accomplished by writing them in point-free style using compositions and combinators.
Furthermore, thinking about the `[]` and `Maybe` functors, `head'` allows you to move back and forth between them (in particular via the `Applicative` instance of `[]`, whose `pure x = [x]` mirrors `Just x` on the `Maybe` side).