Under the Curry-Howard correspondence, or loosely "proofs-as-programs", do we also have "programs-as-proofs", and what would some arbitrary program prove?

Yes. But usually the associated proofs are uninteresting.

Remember that, under the correspondence, we have

  • Types $\longleftrightarrow$ Propositions
  • Programs $\longleftrightarrow$ Proofs

So let's look at your sample code. We get a function $\mathtt{gcd} : \mathbb{N} \times \mathbb{N} \to \mathbb{N}$. With just this type, the program proves

"If there is an element of $\mathbb{N} \times \mathbb{N}$, then there is an element of $\mathbb{N}$"

a completely uninteresting theorem, I think you'd agree. In particular, the simple function $\pi_2 (x,y) = y$, which also has type $\mathbb{N} \times \mathbb{N} \to \mathbb{N}$, proves the same theorem. Why? Because they're two functions of the same type!
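To make this concrete, here's a sketch in Python (with Python's `int` standing in for $\mathbb{N}$ and a pair argument for the product type; the names are mine, not from the question's code). Both functions inhabit the same type, so as proofs they prove the same proposition:

```python
def gcd(pair: tuple[int, int]) -> int:
    """Euclid's algorithm: a nontrivial inhabitant of N x N -> N."""
    x, y = pair
    while y:
        x, y = y, x % y
    return x

def pi2(pair: tuple[int, int]) -> int:
    """Second projection: a trivial inhabitant of the very same type."""
    return pair[1]
```

As far as the type (and hence the proposition) is concerned, these two programs are interchangeable.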

Now, obviously the code for $\mathtt{gcd}$ is doing something interesting, so surely it proves something more interesting than the naive $\pi_2$ function does. Whether we can express that depends on how expressive your type system is. If all you have access to is a type $\mathbb{N}$, then unfortunately you're out of luck. But thankfully, through the language of dependent types we can talk about more interesting types, and thus more interesting propositions.


In the software engineering world, we use types in order to express the desired behavior of a program. Types correspond to specifications for programs.

At the most basic level they tell you what inputs and outputs a given function expects, but with dependent types, we can fully specify the desired behavior of a function. For instance, the naive type of mergesort might be

$$\mathtt{mergesort} : [a] \to [a]$$

All we see is that it takes in a list of things of type $a$ and spits out a list of things of type $a$. But again, this is not an interesting specification. The identity function, or even the function that ignores its input and returns the empty list, also inhabits this type. And again, since types are propositions, we see that $\mathtt{mergesort}$ proves a very uninteresting proposition: "If a list of $a$s exists, then a list of $a$s exists".... like.... yeah, obviously.

So we beef up our type system, and instead look at something like this:

$$\mathtt{mergesort} : [a] \to \{ L : [a] \mid \mathtt{isSorted}(L) \}$$

I'm eliding some syntax, and a formal definition of $\mathtt{isSorted} : [a] \to \mathtt{Bool}$, but hopefully it's clear how to write such a function.
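For instance, one way to write it, sketched as a plain Python function rather than in a dependently typed language:

```python
def is_sorted(xs):
    """isSorted : [a] -> Bool. A list is sorted when every element
    is <= its successor; the empty list is vacuously sorted."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))
```

In a dependently typed language this boolean check would appear inside the type itself, as in the refinement type above.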

So now $\mathtt{mergesort}$ has to meet a more stringent specification. It's not allowed to return any list of $a$s. It must return a sorted list of $a$s. As a function of this type, $\mathtt{mergesort}$ proves

"If a list of $a$s exists, then a sorted list of $a$s exists"

again, this isn't a groundbreaking theorem but it's better than what we had before.

Unfortunately, the constant empty list function still matches this specification. After all, the empty list is (vacuously) sorted! But we can go again. Maybe we write a type so that $\mathtt{mergesort}$ proves

"If a list of $a$s exists, then there is a sorted list of $a$s with exactly the same elements"

which is beginning to get a bit more interesting as a proposition. This now totally pins down the desired behavior of $\mathtt{mergesort}$. Moreover, think about how you would prove such a proposition. If somebody said "prove that a sorted list containing the same elements exists", you would say "just sort it!", and this is exactly the proof that $\mathtt{mergesort}$ provides.
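Here is that refined specification written as a runtime check in Python (a sketch of the idea, not a dependent type): the output must be sorted *and* a permutation of the input, i.e. have exactly the same elements with multiplicity.

```python
from collections import Counter

def meets_spec(inp, out):
    """True iff `out` is a sorted list with exactly the same
    elements (counted with multiplicity) as `inp`."""
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    return ordered and Counter(inp) == Counter(out)
```

The constant-empty-list function now fails this check, because the multisets of elements differ.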


What then, of your $\mathtt{gcd}$ example? Well, as with $\mathtt{mergesort}$, the same program works to prove multiple things. It's really the type that matters. With a little bit of work, your code might be a proof that

"If two natural numbers exist, then there exists a natural number dividing both of them, which is bigger than any other natural number dividing both of them"

which is beginning to look like an interesting (if basic) mathematical proposition!
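As with mergesort, we can phrase this specification as a runtime check in Python (again a sketch, assuming positive inputs, rather than a type):

```python
def meets_gcd_spec(a, b, g):
    """True iff g divides both a and b, and no larger natural
    number divides both (assuming a, b >= 1)."""
    divides_both = a % g == 0 and b % g == 0
    # any common divisor is <= min(a, b), so that range suffices
    no_bigger = all(a % d != 0 or b % d != 0
                    for d in range(g + 1, min(a, b) + 1))
    return divides_both and no_bigger
```

A correct $\mathtt{gcd}$ passes this check on every input; a function that merely returns *some* common divisor does not.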


I hope this helps ^_^


The Curry-Howard correspondence can be seen both as "proofs-as-programs" and "programs-as-proofs", provided that we specify what the logic for proofs and the language for programs are.

The original formulation of the Curry-Howard correspondence is between proofs in a specific logic (the implicational fragment of intuitionistic logic, aka minimal logic) and programs in a specific language (the simply typed $\lambda$-calculus). Later, the correspondence was extended to other logics and languages (for instance, minimal logic with implication and conjunction and the simply typed $\lambda$-calculus with pairs; classical logic and the $\lambda\mu$-calculus; etc.).

One crucial point is that the "programming language" in any version of the Curry-Howard correspondence cannot be Turing-complete. Indeed, the logic where proofs are written is coherent, and in proof theory this means that a theorem called cut-elimination or normalization holds, which implies that every program written in the corresponding "programming language" must terminate its execution. Now, it is well-known that a language that does not allow us to write a program that loops forever is not Turing-complete.

To answer your specific question: the program you wrote contains the while construct, which can create infinite loops; therefore, it cannot have a logical interpretation in the sense of Curry-Howard, that is, it does not correspond to any proof in any logic.

It is true that there are some workarounds to this limitation. For instance, we can consider as a programming language the simply typed $\lambda$-calculus with a fixpoint combinator, which is Turing-complete (see also a discussion here). In this variant of the simply typed $\lambda$-calculus, the while construct can be encoded. But what is the logical content of such a programming language, in the Curry-Howard perspective? Nothing, because the fixpoint combinator would correspond to an axiom in our logical system that breaks the cut-elimination theorem.
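To illustrate, here is a call-by-value fixpoint combinator (the Z combinator) sketched in untyped Python. It lets us write Euclid's gcd with no explicit `while` loop or named recursion, but the self-application at its heart is exactly what simple types forbid:

```python
# Z combinator: a call-by-value fixpoint. The self-application x(x)
# cannot be given a simple type; adding `fix` as a primitive restores
# it, at the cost of the logic's coherence.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Euclid's gcd via the fixpoint, with no explicit loop or recursion
gcd = Z(lambda self: lambda ab:
        ab[0] if ab[1] == 0 else self((ab[1], ab[0] % ab[1])))
```

Python happily runs this, precisely because Python's "type system" imposes no coherence constraint at all.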

Roughly speaking, in proof theory, a proof system without the cut-elimination theorem represents an incoherent logic, that is, a "non-logic". Said differently, what Curry-Howard shows is that there is an irreconcilable tension between two important aspects (see also here): coherence (in logic) and Turing-completeness (in programming languages). One excludes the other; but a logic without coherence is not a logic, and a programming language that is not Turing-complete does not guarantee that you can write the program that you would like to write (although there are "programming languages" such as System F that are based on the Curry-Howard correspondence and are very expressive, even though they are not Turing-complete).

Of course, programming languages used in practice such as C, Python, C++, Java, OCaml, etc., are conceived to be Turing-complete and user-friendly, so for any program of type $A \to B$ written in one of those languages it is inherently impossible to extract a logical proof of $A \to B$ in any proof system, unless the program uses very basic constructs that can be translated into one of the "programming languages" for which Curry-Howard correspondence holds (unfortunately, this is quite rare, for instance it is not the case if the program contains a while loop).


Let $M$ be the measured memory space of a machine running a program written in language $L$. For example, $M$ can be $\Bbb{Z}/2^{32}$, a standard $\text{int}$ datatype on many machines, or it might be $\prod_{i=1}^n(\Bbb{Z}/2^{32})$ for some $n \geq 1$.

This is a good measure of what the program does: the program acts on the registers, cache, memory, and storage of a machine, and that pretty much covers everything. Even in order to write to a port, you would use one of the above.

So then a program taking the datatype $M$ as input and doubling it (modulo $2^{32}$) looks like this in Python:

(Python actually uses arbitrary-precision integers by default in the standard CPython release, so let's pretend it has int32's instead.)

a = int(input("a="))
a = (a << 1) % 2**32

Call with python double_int.py and enter 2 at the prompt. I know there's no output from the program, but this is for a theoretical argument. So a program $f$ takes a machine in measured state $m \in M$ and mutates the machine so that it's in measured state $f(m) \in M$.

I want to say that a program $f$ constitutes an implementation of an algorithm $A$ in the following sense: if you run $f$ on its entire (necessarily finite, but sometimes huge) memory space $M$, i.e. you compute $f(m)$ for every $m \in M$ and check each result manually, or against a trusted other program's result (unit testing), and every result checks out, then the program is a proof that $A$ is realizable in the language $L$, and that $A$ is indeed a valid algorithm computing what it is stated to compute.

Thus when $|M| \lesssim 10^{9}$ and $f$ runs in $\log(m)$ time or lower, one can check a "proof of algorithm $A$" within a few seconds to an hour, using the machine $M$ and the program $f$.
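As a miniature version of this argument, here is the doubling program above verified exhaustively over a scaled-down memory space ($\Bbb{Z}/2^{8}$ instead of $\Bbb{Z}/2^{32}$, so the check finishes instantly; the names and sizes are illustrative):

```python
BITS = 8                 # scaled down from 32 so the check is instant
MOD = 2 ** BITS
M = range(MOD)           # the whole (tiny) measured memory space

def f(m):
    """Double modulo 2**BITS, mirroring `a = (a << 1) % 2**32`."""
    return (m << 1) % MOD

# exhaustively check f against a trusted reference on every state m in M
assert all(f(m) == (2 * m) % MOD for m in M)
```

The assertion plays the role of the "proof": once it passes, doubling-mod-$2^{8}$ has been verified on every machine state in $M$.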