What is predicativity?
Solution 1:
The central question of these type systems is: "Can you substitute a polymorphic type in for a type variable?". Predicative type systems are the no-nonsense schoolmarm answering, "ABSOLUTELY NOT", while impredicative type systems are your carefree buddy who thinks that sounds like a fun idea and what could possibly go wrong?
Now, Haskell muddies the discussion a bit because it believes polymorphism should be useful but invisible. So for the remainder of this post, I will be writing in a dialect of Haskell where uses of forall are not just allowed but required. This way we can distinguish between the type a, which is a monomorphic type which draws its value from a typing environment that we can define later, and the type forall a. a, which is one of the harder polymorphic types to inhabit. We'll also allow forall to go pretty much anywhere in a type -- as we'll see, GHC restricts its type syntax as a "fail-fast" mechanism rather than as a technical requirement.
Suppose we have told the compiler id :: forall a. a -> a. Can we later ask to use id as if it had type (forall b. b) -> (forall b. b)? Impredicative type systems are okay with this, because we can instantiate the quantifier in id's type to forall b. b, and substitute forall b. b for a everywhere in the result. Predicative type systems are a bit more wary of that: only monomorphic types are allowed in. (So if we had a particular b, we could write id :: b -> b.)
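As a concrete sketch of the two instantiations (assuming GHC 9.2 or later, where ImpredicativeTypes is implemented by the Quick Look algorithm; idInt and idPoly are names of my own invention):

```haskell
{-# LANGUAGE ImpredicativeTypes, RankNTypes, TypeApplications #-}
module Main where

-- The predicative case: instantiate a at a monomorphic type (Int).
idInt :: Int -> Int
idInt = id @Int

-- The impredicative case: instantiate a at the polymorphic type
-- forall b. b. Without ImpredicativeTypes, GHC rejects this.
idPoly :: (forall b. b) -> (forall b. b)
idPoly = id @(forall b. b)

main :: IO ()
main = print (idInt 42)
```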
There's a similar story about [] :: forall a. [a] and (:) :: forall a. a -> [a] -> [a]. While your carefree buddy may be okay with [] :: [forall b. b] and (:) :: (forall b. b) -> [forall b. b] -> [forall b. b], the predicative schoolmarm isn't, so much. In fact, as you can see from the only two constructors of lists, there is no way to produce lists containing polymorphic values without instantiating the type variable in their constructors to a polymorphic type. So although the type [forall b. b] is allowed in our dialect of Haskell, it isn't really sensible -- beyond the empty list, there are no (terminating) terms of that type, since every element would itself have to inhabit forall b. b. This motivates GHC's decision to complain if you even think about such a type -- it's the compiler's way of telling you "don't bother".*
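For a sense of what impredicative lists are actually good for, the standard motivating example stores polymorphic functions rather than bare polymorphic values. A sketch, again assuming GHC 9.2+ with ImpredicativeTypes (polyFns and useFirst are hypothetical names of my own choosing):

```haskell
{-# LANGUAGE ImpredicativeTypes, RankNTypes #-}
module Main where

-- A list whose elements remain fully polymorphic. Unlike [forall b. b],
-- the type [forall a. a -> a] has plenty of useful inhabitants.
polyFns :: [forall a. a -> a]
polyFns = [id, \x -> x]

-- Each element, once extracted, can still be used at many types.
useFirst :: (Int, Bool)
useFirst = case polyFns of
  (f : _) -> (f 1, f True)
  []      -> error "impossible: polyFns is non-empty"

main :: IO ()
main = print useFirst
</imports>
```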
Well, what makes the schoolmarm so strict? As usual, the answer is about keeping type-checking and type-inference doable. Type inference for impredicative types is right out. Type checking seems like it might be possible, but it's bloody complicated and nobody wants to maintain that.
On the other hand, some might object that GHC is perfectly happy with some types that appear to require impredicativity:
> :set -XRank2Types
> :t id :: (forall b. b) -> (forall b. b)
{- no complaint, but very chatty -}
It turns out that some slightly-restricted versions of impredicativity are not too bad: specifically, type-checking higher-rank types (which allow type variables to be substituted by polymorphic types only when they appear as arguments to (->)) is relatively simple. You do lose type inference above rank 2, and principal types above rank 1, but sometimes higher-rank types are just what the doctor ordered.
I don't know about the etymology of the word, though.
* You might wonder whether you can do something like this:
data FooTy a where
  FooTm :: FooTy (forall a. a)
Then you would get a term (FooTm) whose type had something polymorphic as an argument to something other than (->) (namely, FooTy), you wouldn't have to cross the schoolmarm to do it, and so the belief "applying non-(->) stuff to polymorphic types isn't useful because you can't make them" would be invalidated. GHC doesn't let you write FooTy, and I will admit I'm not sure whether there's a principled reason for the restriction or not.
(Quick update some years later: there is a good, principled reason that FooTm is still not okay. Namely, the way that GADTs are implemented in GHC is via type equalities, so the expanded type of FooTm is actually FooTm :: forall a. (a ~ forall b. b) => FooTy a. Hence to actually use FooTm, one would indeed need to instantiate a type variable with a polymorphic type. Thanks to Stephanie Weirich for pointing this out to me.)
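The same equality-based elaboration can be seen with an ordinary GADT whose index is monomorphic, which GHC does accept. This is only a sketch of the encoding, not GHC's literal internal output (G and useG are names of my own invention):

```haskell
{-# LANGUAGE GADTs #-}
module Main where

-- GHC elaborates this constructor roughly as
--   GInt :: forall a. (a ~ Int) => G a
-- With a = Int the equality is fine; with a = forall b. b (as in FooTm)
-- it would require instantiating a type variable at a polytype.
data G a where
  GInt :: G Int

useG :: G a -> a -> a
useG GInt n = n + 1  -- matching GInt brings a ~ Int into scope

main :: IO ()
main = print (useG GInt 41)
```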
Solution 2:
Let me just add a point regarding the "etymology" issue, since the other answer by @DanielWagner covers much of the technical ground.
A predicate on something like a is a -> Bool. Now a predicate logic is one that can in some sense reason about predicates -- so if we have some predicate P and we can talk about, for a given a, P(a), then in a "predicate logic" (such as first-order logic) we can also say ∀a. P(a). So we can quantify over variables and discuss the behavior of predicates over such things.
Now, in turn, we say a statement is predicative if all of the things a predicate is applied to are introduced prior to it. So statements are "predicated on" things that already exist. In turn, a statement is impredicative if it can in some sense refer to itself by its "bootstraps".
So in the case of e.g. the id example above, we find that we can give a type to id such that it takes something of the type of id to something else of the type of id. So now we can give a function a type where a quantified variable (introduced by forall a.) can "expand" to be the same type as that of the entire function itself!
Hence impredicativity introduces a possibility of a certain "self reference". But wait, you might say, wouldn't such a thing lead to contradiction? The answer is: "well, sometimes." In particular, "System F", which is the polymorphic lambda calculus and the essential core of GHC's Core language, allows a form of impredicativity that nonetheless has two levels -- the value level, and the type level, which is allowed to quantify over itself. In this two-level stratification, we can have impredicativity and not contradiction/paradox.
Although note that this neat trick is very delicate and easy to screw up by the addition of more features, as this collection of articles by Oleg indicates: http://okmij.org/ftp/Haskell/impredicativity-bites.html
Solution 3:
I'd like to make a comment on the etymology issue, since @sclv's answer isn't quite right (etymologically, not conceptually).
Go back in time, to the days of Russell when everything is set theory -- including logic. One of the logical notions of particular import is the "principle of comprehension"; that is, given some logical predicate φ : A → 2, we would like to have some principle to determine the set of all elements satisfying that predicate, written as "{x | φ(x)}" or some variation thereon. The key point to bear in mind is that "sets" and "predicates" are viewed as being fundamentally different things: predicates are mappings from objects to truth values, and sets are objects. Thus, for example, we may allow quantifying over sets but not quantifying over predicates.
Now, Russell was rather concerned by his eponymous paradox, and sought some way to get rid of it. There are numerous fixes, but the one of interest here is to restrict the principle of comprehension. But first, the formal definition of the principle: ∃S. ∀x. S x ↔ φ(x); that is, for our particular φ there exists some object (i.e., set) S such that for every object (also a set, but thought of as an element) x, we have that S x (you can think of this as meaning "x ∈ S", though logicians of the time gave "∈" a different meaning than mere juxtaposition) is true just in case φ(x) is true. If we take the principle exactly as written then we end up with an impredicative theory. However, we can place restrictions on which φ we're allowed to take the comprehension of. (For example, we might require that φ not contain any second-order quantifiers.) Thus, for any restriction R, if a set S is determined (i.e., generated via comprehension) by some R-predicate, then we say that S is "R-predicative". If every set in our language is R-predicative then we say that our language is "R-predicative". And then, as is often the case with hyphenated prefix things, the prefix gets dropped off and left implicit, whence "predicative" languages. And, naturally, languages which are not predicative are "impredicative".
That's the old school etymology. Since those days the terms have gone off and gotten lives of their own. The ways we use "predicative" and "impredicative" today are quite different, because the things we're concerned about have changed. So it can sometimes be a bit hard to see how the heck our modern usage ties back to this stuff. Honestly, I don't think knowing the etymology really helps any in terms of figuring out what the words are really about (these days).