Visualizing Concepts in Mathematical Logic

Solution 1:

Something that I sometimes find useful is to consider what's called the Curry-Howard Isomorphism: statements of logic correspond to the types of computer programs, and the statements are theorems if and only if the corresponding type is the type of a program that can actually exist.

In this correspondence, $A\to B$ represents the type of a function in a program that takes an argument of type $A$ and calculates a value of type $B$.

A function of type $A\to A$ is obviously possible, because the function just takes the $A$ that it got and gives it right back:

 define function f(a) {
   return a;
 }

And $A\to(B\to A)$ is also possible: the function needs to take an argument of type $A$ and then give back a function of type $B\to A$. It can do this, because it can give back a second function that ignores the $B$ that it got, and that gives back the $A$ that the first function got to begin with:

 define function f(a) {
   define function g(b) {
     return a;
   }
   return g;
 }
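
In a typed language such as TypeScript, the compiler itself checks this correspondence: a definition type-checks exactly when we can actually produce a value of the claimed type. A minimal sketch of both programs (the names are my own):

```typescript
// A -> A: hand back exactly the value received.
function identity<A>(a: A): A {
  return a;
}

// A -> (B -> A): return a second function that ignores its own
// argument and produces the value the outer function captured.
function constFn<A, B>(a: A): (b: B) => A {
  return (_b: B) => a;
}
```

Because both functions are fully generic, they cannot inspect their arguments; the only way to produce the result is the one the types force on us.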

But in contrast, $A\to B$ is not a theorem, because how can you write a function that gets an argument of type $A$ and gives back a result of some other, completely arbitrary type? Where could it get a $B$ from if all it has is an $A$?

The isomorphism also identifies logical AND with pairing: a value of type $A\land B$ is a pair that contains both an $A$ and a $B$, and either (or both) can be extracted. It identifies logical OR with what's called a disjoint union: A value of type $A\lor B$ is either an $A$ or a $B$, but not both, and the program can tell which of the two it is. Similarly there are interpretations for negation, for constant true and false values, and so forth.
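
These two identifications can also be written down directly; here is a sketch in TypeScript, where AND becomes a record holding both components and OR becomes a tagged union (the type and function names are my own):

```typescript
// Logical AND as pairing: a value of type Pair<A, B> holds both an A
// and a B, and either component can be extracted.
type Pair<A, B> = { left: A; right: B };

function extractLeft<A, B>(p: Pair<A, B>): A {
  return p.left;
}

function extractRight<A, B>(p: Pair<A, B>): B {
  return p.right;
}

// Logical OR as a disjoint union: the tag records which of the two
// cases the program actually has, and it can branch on that tag.
type Or<A, B> = { tag: "left"; value: A } | { tag: "right"; value: B };

function describe(x: Or<number, string>): string {
  return x.tag === "left"
    ? "a number: " + x.value
    : "a string: " + x.value;
}
```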

For a more elaborate example (a program that corresponds to the theorem $\lnot\lnot(P\lor\lnot P)$), see this answer.

The only catch is that not every classically true theorem corresponds to a program in a completely straightforward way. Some theorems of logic, such as Peirce's law, $((A\to B)\to A)\to A$, don't correspond to a simple program. Instead they correspond to programs that use advanced control-flow features like continuations or exceptions.

Still it's often helpful to think of logical theorems as specifications for programs; if you can think of a program which performs the corresponding calculation, then the statement must be a theorem.

Viewed in this way, the statement you originally asked about, $((P\to Q)\to Q)\to Q$, seems quite unlikely: you need a function $f$ which gets an argument $x$, which is itself a function of type $(P\to Q)\to Q$, and $f$ needs to produce a value of type $Q$. If $f$ could call $x$, then $x$ would return the needed value. But it can't call $x$ because to do so it needs to give it an argument of type $P\to Q$, and it doesn't have one.
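
The type-level intuition agrees with a classical check: evaluating $((P\to Q)\to Q)\to Q$ over all four truth-value assignments shows it is not even classically valid. A quick sketch:

```typescript
// Classical material implication on booleans.
const implies = (p: boolean, q: boolean): boolean => !p || q;

// ((P -> Q) -> Q) -> Q, evaluated classically.
const formula = (p: boolean, q: boolean): boolean =>
  implies(implies(implies(p, q), q), q);

// Collect the valuations where the formula comes out false.
const counterexamples: Array<[boolean, boolean]> = [];
for (const p of [false, true]) {
  for (const q of [false, true]) {
    if (!formula(p, q)) counterexamples.push([p, q]);
  }
}
```

The single counterexample is $P$ true, $Q$ false, matching the observation that $f$ has no way to conjure a $Q$.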

Solution 2:

I think part of what makes excellent practitioners excellent is that they have many different intuitive representations of the things they work with, and can switch between them effortlessly according to what works well in a given situation. The important thing is not to stick with a single visualization and use it for everything, but to be familiar with a range of possibilities that you can quickly test to see if one of them is helpful for the task at hand.

The Curry-Howard isomorphism (see MJD's answers) is great to know, especially if you already have (functional) programming experience. Closely related to it is the intuitionistic conception of proofs as conversational games, an informal version of which is often used to explain the semantics of quantifiers to beginning students ("the adversary chooses an $\epsilon>0$ and then we have to produce a $\delta>0$ such that ..."). This can be extended to propositional connectives: If we want to prove $P\land Q$, the adversary chooses whether he wants to see our proof of $P$ or of $Q$, but to prove $P\lor Q$, we get to choose which of the proofs to present. If we must prove $P\to Q$, the adversary starts by giving us his proof of $P$, and afterwards we must prove $Q$.

On the more concretely visual side, Venn diagrams can indeed be useful, but they become cumbersome when there are many propositional letters. Personally I often find it surprisingly helpful simply to visualize the abstract syntax tree of a formula instead of its linear written syntax. This is related to (but distinct from) visualizing a propositional formula as a Boolean circuit containing gates and wires.
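
One way to get at the tree view is to represent formulas as a tagged-union data type and render the syntax tree directly, one node per line with indentation for depth. A small sketch (the type and constructor names are my own):

```typescript
// Propositional formulas as an abstract syntax tree.
type Formula =
  | { kind: "var"; name: string }
  | { kind: "not"; body: Formula }
  | { kind: "and" | "or" | "imp"; left: Formula; right: Formula };

// Render the tree with two-space indentation per level.
function drawTree(f: Formula, depth: number = 0): string {
  const pad = "  ".repeat(depth);
  switch (f.kind) {
    case "var":
      return pad + f.name + "\n";
    case "not":
      return pad + "NOT\n" + drawTree(f.body, depth + 1);
    default:
      return (
        pad + f.kind.toUpperCase() + "\n" +
        drawTree(f.left, depth + 1) +
        drawTree(f.right, depth + 1)
      );
  }
}
```

For example, `drawTree` applied to $P\to(Q\lor\lnot P)$ prints `IMP` at the root with the subformulas indented beneath it.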

Even something as primitive as truth tables can be helpful for some purposes. For example, my best intuition about how the possible two-variable Boolean functions relate to each other is in terms of two-dimensional truth tables: $$ \begin{array}{c|cc} \to & 0 & 1 \\ \hline 0 & 1 & 1 \\ 1 & 0 & 1 \end{array} \quad \begin{array}{c|cc} \land & 0 & 1 \\ \hline 0 & 0 & 0 \\ 1 & 0 & 1 \end{array} \quad \begin{array}{c|cc} \oplus & 0 & 1 \\ \hline 0 & 0 & 1 \\ 1 & 1 & 0 \end{array} \quad \begin{array}{c|cc} \text{a} & 0 & 1 \\ \hline 0 & 0 & 0 \\ 1 & 1 & 1 \end{array} \quad \ldots $$ where, for example, "$\to$" and "$\land$" are similar because their truth tables both have a wedge of three identical squares with an odd one out in one of the corners. I can make one into the other by flipping and inverting, which tells me that an AND gate can become an IMPLIES gate by adding inverters to some of the inputs and outputs, and vice versa.
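
That flip-and-invert relationship can be checked exhaustively: $p\to q$ equals $\lnot(p\land\lnot q)$, i.e. an AND gate with an inverter on its second input and on its output. A sketch (the function names are my own):

```typescript
// Material implication as a gate.
const impliesGate = (p: boolean, q: boolean): boolean => !p || q;

// IMPLIES built from an AND gate plus inverters, as the truth-table
// symmetry suggests: invert the second input and invert the output.
const impliesFromAnd = (p: boolean, q: boolean): boolean => !(p && !q);

// Exhaustive check over all four input combinations.
let allAgree = true;
for (const p of [false, true]) {
  for (const q of [false, true]) {
    if (impliesGate(p, q) !== impliesFromAnd(p, q)) allAgree = false;
  }
}
```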