The Curry-Howard isomorphism simply states that types correspond to propositions, and values correspond to proofs.

Int -> Int doesn't mean much of interest as a logical proposition. When interpreting a type as a logical proposition, you're only interested in whether the type is inhabited (has any values) or not. So Int -> Int just means "given an Int, I can give you an Int", which is, of course, true. There are many different proofs of it (corresponding to the various functions of that type), but when reading it as a logical proposition, you don't really care which.

That doesn't mean interesting propositions can't involve such functions; just that that particular type is quite boring, as a proposition.

For an instance of a function type that isn't completely polymorphic and that has logical meaning, consider p -> Void (for some p), where Void is the uninhabited type: the type with no values (except ⊥ in Haskell, but I'll get to that later). The only way to get a value of type Void is to prove a contradiction (which is, of course, impossible), and since a value of Void witnesses a contradiction, you can get a value of any type from it (i.e. there is a function absurd :: Void -> a). Accordingly, p -> Void corresponds to ¬p: it means "p implies falsehood".
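Here is a minimal sketch of this encoding in Haskell (Void and absurd exist in Data.Void in base; they are redefined here so the snippet is self-contained, and the names Not and contradiction are illustrative):

    {-# LANGUAGE EmptyCase #-}

    -- The uninhabited type: no constructors.
    data Void

    -- From a proof of falsehood, anything follows (ex falso quodlibet).
    absurd :: Void -> a
    absurd v = case v of {}

    -- Negation: ¬p is encoded as "p implies falsehood".
    type Not p = p -> Void

    -- A small proof: p and ¬p together yield a contradiction.
    contradiction :: p -> Not p -> Void
    contradiction p notP = notP p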

Intuitionistic logic is just the kind of logic that common functional languages correspond to. Importantly, it's constructive: a proof of a -> b gives you an algorithm to compute a b from an a, which isn't true in classical logic (because of the law of the excluded middle, which tells you that every proposition is either true or false, but not why).
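As a small illustration of the constructive flavour: excluded middle itself, Either p (p -> Void), has no (⊥-free) proof in Haskell, but its double negation does. A sketch (Not and doubleNegLEM are illustrative names, with Void as above):

    data Void
    type Not p = p -> Void

    -- ¬¬(p ∨ ¬p) is constructively provable even though p ∨ ¬p is not.
    doubleNegLEM :: Not (Not (Either p (Not p)))
    doubleNegLEM k = k (Right (\p -> k (Left p)))

The trick is that, handed a refutation k of the disjunction, we can answer with the right branch, and use k again if anyone ever supplies a p.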

Even though functions like Int -> Int don't mean much as propositions, we can make statements about them with other propositions. For instance, we can declare the type of equality of two types (using a GADT):

data Equal a b where
    Refl :: Equal a a

If we have a value of type Equal a b, then a is the same type as b: Equal a b corresponds to the proposition a = b. The problem is that this way we can only talk about equality of types. If we had dependent types, though, we could easily generalise this definition to work over any values, and then Equal a b would correspond to the proposition that the values a and b are identical. So, for instance, we could write:

type IsBijection (f :: a -> b) (g :: b -> a) =
    (forall x. Equal (g (f x)) x, forall y. Equal (f (g y)) y)

Here, f and g are ordinary term-level functions, so f could easily have type Int -> Int. Again, Haskell can't do this; you need dependent types for things like this.
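Even without dependent types, the type-level Equal is already a useful proposition: pattern matching on Refl teaches the type checker that the two types coincide, which lets us prove the usual laws of equality. A sketch (sym, trans and cong are illustrative names):

    {-# LANGUAGE GADTs #-}

    data Equal a b where
      Refl :: Equal a a

    -- Equality is symmetric: matching on Refl reveals a ~ b.
    sym :: Equal a b -> Equal b a
    sym Refl = Refl

    -- ... and transitive.
    trans :: Equal a b -> Equal b c -> Equal a c
    trans Refl Refl = Refl

    -- Equal types can be substituted under any type constructor f.
    cong :: Equal a b -> Equal (f a) (f b)
    cong Refl = Refl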

Typical functional languages are not really well-suited to writing proofs, not only because they lack dependent types, but also because of ⊥, which, having type a for all a, acts as a proof of any proposition. But total languages like Coq and Agda exploit the correspondence to act as proof systems as well as dependently-typed programming languages.
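To see concretely why ⊥ spoils things, note that in Haskell any proposition at all can be "proved" by a non-terminating or undefined value. A sketch (bogus and alsoBogus are illustrative names):

    {-# LANGUAGE GADTs #-}

    data Equal a b where
      Refl :: Equal a a

    data Void

    -- In a total language these types would be uninhabited, but in
    -- Haskell ⊥ inhabits every type, so blatant falsehoods typecheck:
    bogus :: Equal Int Bool
    bogus = undefined       -- diverges if ever forced

    alsoBogus :: Void
    alsoBogus = alsoBogus   -- an infinite loop also has type Void

Total languages reject both definitions, which is exactly what makes their type systems usable as proof systems.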


Perhaps the best way to understand what it means is to start (or at least try) to use types as propositions and programs as proofs. A good way to do that is to learn a language with dependent types, such as Agda (it is written in Haskell and is syntactically similar to Haskell). There are various articles and courses on that language. Learn You an Agda is incomplete, but it tries to simplify things, much as the LYAHFGG book does for Haskell.

Here is an example of a simple proof:

{-# OPTIONS --without-K #-} -- we are consistent

module Equality where

-- Peano arithmetic.
-- 
--   ℕ-formation:     ℕ is a set.
-- 
--   ℕ-introduction:  o ∈ ℕ,
--                    a ∈ ℕ | (1 + a) ∈ ℕ.
-- 
data ℕ : Set where
  o : ℕ
  1+ : ℕ → ℕ

-- Axiom for _+_.
-- 
--   Form of ℕ-elimination.
-- 
infixl 6 _+_
_+_ : ℕ → ℕ → ℕ
o + m = m
1+ n + m = 1+ (n + m)

-- The identity type for ℕ.
-- 
infix 4 _≡_
data _≡_ (m : ℕ) : ℕ → Set where
  refl : m ≡ m

-- Useful property.
-- 
cong : {m n : ℕ} → m ≡ n → 1+ m ≡ 1+ n
cong refl = refl

-- Proof _of_ mathematical induction:
-- 
--   P o, ∀ x. P x → P (1+ x) | ∀ x. P x.
-- 
ind : (P : ℕ → Set) → P o → (∀ n → P n → P (1+ n)) → ∀ n → P n
ind P P₀ _ o = P₀
ind P P₀ next (1+ n) = next n (ind P P₀ next n)

-- Associativity of addition using mathematical induction.
-- 
+-associative : (m n p : ℕ) → (m + n) + p ≡ m + (n + p)
+-associative m n p = ind P P₀ is m
  where
    P : ℕ → Set
    P i = (i + n) + p ≡ i + (n + p)
    P₀ : P o
    P₀ = refl
    is : ∀ i → P i → P (1+ i)
    is i Pi = cong Pi

-- Associativity of addition using (dependent) pattern matching.
-- 
+-associative′ : (m n p : ℕ) → (m + n) + p ≡ m + (n + p)
+-associative′ o _ _ = refl
+-associative′ (1+ m) n p = cong (+-associative′ m n p)

There you can see the proposition (m + n) + p ≡ m + (n + p) as a type, and its proof as a function. There are more advanced techniques for such proofs (e.g. preorder reasoning; combinators in Agda play a role similar to tactics in Coq).

What else can be proven:

  • head ∘ init ≡ head for vectors, here.

  • That your compiler produces a program whose execution gives the same value as the interpretation of the same (host) program, here, for Coq. This book is also a good read on the topic of language modelling and program verification.

  • Anything else that can be proven in constructive mathematics, since Martin-Löf's type theory is comparable in expressive power to ZFC. In fact, the Curry-Howard isomorphism can be extended to physics and topology, and to algebraic topology.