How or why does intuitionistic logic prove negations from within the theory, constructively?

I'm having a bit of cognitive dissonance about why, in intuitionistic logic, it's possible to show sentences like

$(\neg A \land \neg B) \implies \neg(A\lor B).$

In plain text: If 'A isn't true' as well as 'B isn't true', then neither is 'A or B'.

In terms of types, using the notation

$A\implies B\equiv A\to B,\hspace{.5cm} \neg A\equiv A\to 0,\hspace{.5cm} A\land B\equiv A\times B,\hspace{.5cm} A\lor B\equiv A+ B,$

where $0$ is the empty type (it comes with no constructors and is said to have no terms), that statement reads

$((A\to 0)\times (B\to 0))\to (A+B)\to 0.$
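
For concreteness, this dictionary can be transcribed into Haskell; the following is just an illustrative sketch, using the empty type `Void` from `Data.Void` as $0$ and an ad-hoc synonym `Not`:

    import Data.Void (Void)

    -- Implication A ⇒ B is the function type a -> b.
    -- Negation ¬A is a function from a into the empty type.
    type Not a = a -> Void

    -- Conjunction A ∧ B is the pair type (a, b);
    -- disjunction A ∨ B is the sum type Either a b.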


To demonstrate what I do understand about it, here is how I understand the constructive proof of the above statement:

Firstly, the "$(A\to 0)\times (B\to 0)$"-part: if we have $(A\to 0)$ and $(B\to 0)$, this means we have two functions $f:(A\to 0)$ and $g:(B\to 0)$ at hand, which demonstrate absurdity. E.g., if we had $a:A$, then $f(a):0$ would be a term of the empty type. More formally, we have a pair $p:=(f,g)$ of type $(A\to 0)\times (B\to 0)$, from which we can project out the two absurd functions $\pi_1(p)=f$ and $\pi_2(p)=g$.
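
In that Haskell sketch, $p$ is just a tuple and the projections are `fst` and `snd` (the names here are purely illustrative):

    -- Projecting the two refutations out of the pair p.
    refuteLeft :: (Not a, Not b) -> Not a
    refuteLeft p = fst p

    refuteRight :: (Not a, Not b) -> Not b
    refuteRight p = snd p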

Regarding the "$(A+B)$"-part: one possible way to use a term of the "or" type is case analysis. With $p$, and hence $f$ and $g$, we have one function for each of the cases $A$ and $B$, and so a function $h$ of type $(A+B)\to 0$ can readily be constructed. Say we use $a:A$ and $b:B$ as the generic terms; then we can set $h(a):=f(a)$ and $h(b):=g(b)$.
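
In the same sketch, this case analysis is exactly pattern matching on `Either`:

    -- Case analysis on A ∨ B: use f in the A case and g in the B case.
    refuteEither :: Not a -> Not b -> Not (Either a b)
    refuteEither f _ (Left  a) = f a
    refuteEither _ g (Right b) = g b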

In total, a proof of $((A\to 0)\times (B\to 0))\to (A+B)\to 0$ is a function $P$ which takes a $p$ and a $c$ (either an $a$ or a $b$) and computes a term of $0$:

$P:=\lambda p.\ \lambda c.\ \mathrm{if}\ (c:A)\ \mathrm{do}\ \pi_1(p)(c)\ \mathrm{else}\ \pi_2(p)(c)$.
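
Rendered as a single Haskell term (a sketch; the informal `if (c:A)` becomes pattern matching on the sum type):

    -- P := λp. λc. case c of A-case -> π₁(p)(c); B-case -> π₂(p)(c)
    deMorgan :: (Not a, Not b) -> Not (Either a b)
    deMorgan p = \c -> case c of
      Left  a -> fst p a
      Right b -> snd p b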


Now a simpler example is the proof of $A\implies \neg\neg A$, or

$A\to ((A\to 0)\to 0).$

If we have $A$, and hence a term $a:A$, then given a function $f:A\to 0$ we have $f(a):0$, and so a function proving the above is

$P:=\lambda a.\ \lambda f.\ f(a)$

and of type $A\to ((A\to 0)\to 0).$
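
In the Haskell sketch this proof term reads:

    -- A ⇒ ¬¬A: given a : A and a hypothetical refutation f : A → 0, apply f to a.
    doubleNegIntro :: a -> Not (Not a)
    doubleNegIntro a = \f -> f a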

Conversely, we can't, without further information, prove $\neg\neg A\implies A$, or $((A\to 0)\to 0)\to A,$ because from a given function $f:((A\to 0)\to 0)$ we can't reach into its domain and say something about $A$.

Okay, now that this is out of the way, here is my problem:

The proof of a negation consists of showing how having a term of the assumption type would lead to $0$ being inhabited. And indeed, whenever we successfully prove a negation $\dots \to 0$, we did this by constructing a function which, when evaluated, would return a term of type $0$. Now the type $0$ deliberately comes with no construction rules, and so to make the above constructions we needed to use a function which was itself of type $\dots \to 0$. Consequently, and more concretely, in both cases above the eventual proofs are functions including a lambda abstraction over such an "absurd function". So if every absurd function is a function of other absurd functions, we can at best chain these functions together, but we can never evaluate any of them. Indeed, if our theory is consistent, this already means we can never end up with $f(\dots):0$. But if we can only construct functions which can't be evaluated, how on earth is this a "constructive proof"? This isn't a function, it's something to chain together. Or put differently, we didn't construct a proof, we constructed a line of thought. At best, we can flee to the meta-theory and judge "if we end up with this thing of the right type, then I accept it as a proof". But this is then completely removed from the semantic content of what was to be proved, and I also don't know if the meta-logic can be intuitionistic.

@CarlMummert: As I try to explain above, and what I feel the two answers so far don't address, my problem does not lie in "I prove the absurd by using something absurd", i.e. I'm okay with $\neg A\equiv A\implies ⊥$, but in the representation of this in the Curry-Howard isomorphism and the lambda calculus. I thought about it more now, and my issue likely lies in the fact that one defines $\neg A\equiv A\to 0$, although I see no good argument for expressing negation via the already given function concept. It turns out to work, so that's one possible justification, but it's a little step that undermines the meaning of the function concept. The problem here is that, as I describe above, for a proof of a negation we describe or "construct" something which isn't a function. It has the form $\lambda f.\ \mathrm{foo}$, but you can't even pass the totality of its arguments to it, as these arguments are not constructible (which happens to be exactly what we want to demonstrate). The proof-object we constructed can hence only be used to have its type checked. The purpose of involving $0$, the type without constructors, in the definition of $\neg A$ here is seemingly only to guarantee that we can't prove $A\to 0$ in some other, accidental way. Can't we make $\neg A$ a standalone type which represents the notion of a proof of the absurd? Well anyway, if there is no reference to an appealing alternative, I'll just go with the top answer, because I guess the engineering answer "it just works" is an answer too.


The constructive version of proving $\lnot A$ is (as you noted) giving a constructive procedure that derives from any hypothetical instance of $A$ an absurdity; this demonstrates that having an instance of $A$ is absurd. So to prove (from hypotheses) $\lnot(A \lor B)$ one gives such a constructive proof assuming an instance of $A\lor B$, and you did that.

Now you complain that this is only a hypothetical construction, based on an impossible assumption. But given that the goal is to prove that the assumption is impossible, what more constructive alternative does one have? There isn't any. The conclusion that any negative statement in a constructive theory is somewhat weak (compared to positive statements) is quite justified though. In particular $\lnot\lnot A$ is not sufficient to deduce $A$, in general (it is though for many particular instances of $A$, for instance it is always so if $A$ is itself negative: $A=\lnot B$, since one does have $\lnot\lnot\lnot B\to\lnot B$).
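
For instance, the last remark corresponds to a concrete proof term; a sketch in Haskell (with `Void` from `Data.Void` playing the role of $\bot$, and an illustrative name) might be:

    import Data.Void (Void)

    -- ¬¬¬B → ¬B: given a refutation k of ¬¬B and a b : B,
    -- feed k the double-negation introduction of b.
    tripleNegElim :: (((b -> Void) -> Void) -> Void) -> (b -> Void)
    tripleNegElim k b = k (\f -> f b)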


Another way to understand $\bot$ that you might find more intuitive is that it is the type of a program that doesn't return a value, perhaps because it threw a fatal exception that terminated the computation, or because it went into an infinite loop and never completed. In fact if you construct such a function in a language like Haskell where such things are possible, the type inferencer will infer a type that is compatible with $\bot$. For example

  loop x = loop x

says that to calculate $loop(x)$ for some argument $x$, you first compute $loop(x)$, and that is the answer; this is exactly an infinite loop. Haskell obligingly infers the type to be $a\to b$, where $a$ and $b$ are completely unconstrained, so in particular $a\to\bot = \lnot a$ is a valid instance of this type.
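
Concretely, the same definition still type-checks when the result type is pinned down to the empty type `Void` from `Data.Void` (an illustrative annotation, with `Void` standing in for $\bot$):

    import Data.Void (Void)

    -- loop never returns, so any result type may be assigned to it,
    -- including the empty type Void; i.e. loop inhabits a -> Void = ¬a.
    loop :: a -> Void
    loop x = loop x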

Under this interpretation, we understand $$(\lnot A \land\lnot B) \to \lnot(A\lor B)$$ to mean the following: We are given a function which takes an argument of type $A$ and goes into an infinite loop (that's $\lnot A$), and a function which takes an argument of type $B$ and goes into an infinite loop (that's $\lnot B$), and we want to construct from this a function which takes either an $A$ or a $B$ and goes into an infinite loop (that's $\lnot(A\lor B)$).

If we were working in Haskell or some similar language this would be completely trivial, as the $\lnot a$ types are all inhabited axiomatically (the definition above being an example) but in the world of logic there are fewer axioms: we are effectively working in a programming language where we can't start an infinite loop unless someone actually gives us a function that does so when it is called. So the conclusion is not foregone and we actually need the premisses.

As you know, constructing the function whose type is actually $\lnot A \land\lnot B \to \lnot(A\lor B)$ is straightforward: It gets its argument, performs case analysis on it to decide whether it is an $A$ or a $B$, and then calls one of the other components of the input, which initiates the required infinite loop.
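
A Haskell sketch of that function, again with `Void` standing in for $\bot$ (the name `notOr` is just for illustration):

    import Data.Void (Void)

    -- Given refutations of A and of B and a value of type A ∨ B,
    -- match on which side we got and call the matching premise;
    -- that call is what starts the loop (or throws the exception).
    notOr :: (a -> Void, b -> Void) -> Either a b -> Void
    notOr (f, _) (Left  a) = f a
    notOr (_, g) (Right b) = g b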

You can read through all that again, replacing “goes into an infinite loop” with “throws a fatal exception” and it should still make sense: Your function gets two functions, one of which throws an exception if given an argument of type $A$, and the other of which throws an exception if given an argument of type $B$. It is also given either an $A$ or a $B$, but it doesn't know ahead of time which. It has no way itself to throw an exception. How can it still guarantee that one is thrown?

So to answer your question about $\bot$ representing an uninhabited type, no, we don't only do it out of engineering necessity. We do it because it's one of the ways of understanding types that involve $\bot$. But it is only one of the possible ways of understanding those types. A program that never yields any value, because it dies, or because it never terminates, is also a perfectly good (non-)producer of $\bot$ values, and these interpretations of $\bot$ values are equally important—much more so actually, because as you observed you can't actually exhibit a producer of $\bot$ values in practice, but programs that loop or that abort come up in practice all the time.

The logic doesn't distinguish between computations that abort, computations that get stuck, and computations that are impossible because they return an impossible value, whereas all those things are quite different operationally. Although they're logically equivalent it is sometimes easier to think of one of them or another when trying to understand the operational meaning of a proposition. Instead of thinking about programs that would produce impossible values but don't because they can't, which as you observed is very strange, you can think about programs that die. It comes to the same thing.