I know, I know, there are tons of questions on this -- I've read them all, it feels like. I don't understand why $(F \implies F) \equiv T$ and $(F \implies T) \equiv T$.

One of the best examples I saw showed how, if you start out with a false premise like $3=5$, you can derive all sorts of statements, some true like $8=8$ but also some false like $6=10$; hence $F \implies T$ is true but so is $F \implies F$.

But examples don't always do it for me, because how do I know the relationship holds outside the example? Examples aren't always sufficiently general.

Sometimes people say, "Well, $(p \implies q)$ is equivalent to $\lnot p \lor q$, so you can prove it that way!" Except we arrived at that representation from the truth table in the first place, via disjunctive normal form, so the argument is circular and I don't find it convincing.

Sometimes people will use analogies like, "Well, suppose we relabeled those two 'vacuous cases' in the three other possible ways: $F/F$, $F/T$, $T/F$ -- see how the end results make no sense?" Sure, but $T/T$ makes no sense to me either, so I don't see why this is a good argument. Just because the other three assignments are silly doesn't tell me why $T/T$ is not silly.

Other times I see "Well, it's just defined that way because it's useful"... with no examples of how it's actually useful, or of why we couldn't make do with some other definition. Then come the inevitable counter-responders who insist it's not a mere definition of convenience but a consequence of other rules in the system, and so on, adding to the confusion.

So I'm hoping to skip all that: Is there some other way to show without a doubt that $(F \implies q) \equiv T$?


Solution 1:

I've never been satisfied with the definition of the material implication in the context of propositional logic alone. The only really important things in the context of propositional logic are that $T \Rightarrow T$ is true and $T \Rightarrow F$ is false. It feels like the truth values of $F \Rightarrow T$ and $F \Rightarrow F$ are just not specified by our intuition about implication. After all, why should "if the sky is green, then clouds are red" be true?

But in predicate logic, things are different. In predicate logic, we'd like to be able to say $\forall x (P(x) \Rightarrow Q(x))$ and have the $x$'s for which $P(x)$ is false not interfere with the truth of the statement.

For example, consider "among all integers, all multiples of $4$ are even". That statement is true even though $1$ is neither a multiple of $4$ nor even (an instance of $F \Rightarrow F$). It's also true even though $2$ is even despite not being a multiple of $4$ (an instance of $F \Rightarrow T$).

But now in classical logic, every proposition has a single truth value. Thus the only way to define $\forall x R(x)$ is "for every $x$, $R(x)$ is true". We can't define it in some other way, like "for every $x$, either $R(x)$ is true or $R(x)$ is too nonsensical to have a truth value". Thus we are stuck defining $F \Rightarrow T$ and $F \Rightarrow F$ to both be true, if $\forall x (P(x) \Rightarrow Q(x))$ is going to behave the way we want.
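The quantifier argument above can be checked mechanically. Here is a minimal sketch in Python; the predicate names `p` and `q` and the finite sample domain are my own illustrative choices:

```python
# "Among all integers, all multiples of 4 are even", checked on a finite sample.
def p(x): return x % 4 == 0   # x is a multiple of 4
def q(x): return x % 2 == 0   # x is even

def implies(a, b):
    """Material implication: false only when a is true and b is false."""
    return (not a) or b

domain = range(-20, 21)

# "For all x, p(x) implies q(x)" comes out true...
assert all(implies(p(x), q(x)) for x in domain)

# ...and it stays true precisely because the F => T and F => F cases
# count as true rather than false:
assert implies(p(2), q(2))   # F => T: 2 is even but not a multiple of 4
assert implies(p(1), q(1))   # F => F: 1 is neither
```

If either vacuous case were instead defined as false, the `all(...)` above would fail on harmless values like $1$ or $2$, and the universally quantified statement could never hold.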

In a different system of logic, we might do things differently. But in classical logic, "every proposition has a truth value" is basically an axiom.

Solution 2:

Given that we want the $\rightarrow$ to capture the idea of an 'if .. then ..' statement, it seems reasonable to insist that $P \rightarrow P$ is a True statement, no matter what $P$ is, and thus no matter what truth-value $P$ has.

So, if $P$ is False, then we get $\boxed{F \rightarrow F = T}$

It is likewise reasonable to insist that $(P \land Q) \rightarrow P = T$, again no matter what $P$ and $Q$ are.

So, if $P$ is True and $Q$ is False, we get $(T \land F) \rightarrow T = T$, that is, $\boxed{F \rightarrow T = T}$
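In fact, these two principles, together with the uncontroversial requirement that $T \rightarrow F$ is false, pin down the classical truth table uniquely. A small Python sketch (my own formulation, enumerating all 16 possible binary truth functions) confirms this:

```python
from itertools import product

# Enumerate all 16 binary truth functions f: (bool, bool) -> bool and keep
# those satisfying Solution 2's two principles plus "T -> F is false".
def candidates():
    for table in product([False, True], repeat=4):
        # table indexed by (a, b): positions (F,F), (F,T), (T,F), (T,T)
        f = lambda a, b, t=table: t[2 * a + b]
        ok_reflexive = all(f(p, p) for p in (False, True))              # P -> P
        ok_weakening = all(f(p and q, p)                                # (P and Q) -> P
                           for p in (False, True) for q in (False, True))
        if ok_reflexive and ok_weakening and not f(True, False):
            yield {(a, b): f(a, b) for a in (False, True) for b in (False, True)}

tables = list(candidates())
assert len(tables) == 1           # the constraints admit exactly one table
assert tables[0] == {
    (False, False): True,   # F -> F = T
    (False, True):  True,   # F -> T = T
    (True,  False): False,  # T -> F = F
    (True,  True):  True,   # T -> T = T
}
```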

Solution 3:

Other times I see "Well it's just defined that way because it's useful"... with no examples of how it's indeed useful

OK, then let's give an example of a real-world use case. I'm a computer programmer by trade, but I am also concerned with the meta-problem of how we know when a program is correct. That is, I use static analysis to understand programs, and "implies" as it is defined is extremely useful in this analysis.

Let's suppose I have a list of orders and a reference to a customer, and I happen to know that if the reference is valid, then the list contains at least one order:

if (customer != null)
{
  Assert(orders.Count() > 0);
  Print(orders.First());
}

"Assert" crashes the program if the condition is false.

Let us call a computer program which crashes an "F" program and one which runs without crashing a "T" program.

Now let's look at the truth table of this little program fragment.

customer != null   orders.Count() > 0   Program classification
--------------------------------------------------------------
True               True                 T -- because the assertion succeeds
True               False                F -- because the assertion crashes
False              True                 T -- because the assertion never runs at all
False              False                T -- because the assertion never runs at all

Now suppose we had an implies operator in this language. We would want to be able to rewrite our program as

Assert(customer != null  implies  orders.Count() > 0);
if (customer != null)
{
  Print(orders.First());
}

without changing the categorization of the program. In order to maintain the meaning of the program, the truth table of binary operator A implies B must be the same as (NOT A) OR B.
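The two fragments can be translated into a runnable Python sketch (the names `Assert`, `version1`, and `version2` are mine, and `Print` is elided) to confirm that the rewrite preserves the program's classification in all four cases:

```python
def Assert(cond):
    """Crash the program (raise) if the condition is false."""
    if not cond:
        raise AssertionError

def implies(a, b):
    return (not a) or b

def version1(customer, orders):
    """Original fragment: assertion inside the conditional."""
    try:
        if customer is not None:
            Assert(len(orders) > 0)
            # Print(orders[0]) would go here
        return "T"   # ran without crashing
    except AssertionError:
        return "F"   # crashed

def version2(customer, orders):
    """Rewritten fragment: assertion hoisted out, using implies."""
    try:
        Assert(implies(customer is not None, len(orders) > 0))
        if customer is not None:
            pass  # Print(orders[0]) would go here
        return "T"
    except AssertionError:
        return "F"

# All four rows of the truth table give the same classification:
cases = [("alice", ["order"]), ("alice", []), (None, ["order"]), (None, [])]
for customer, orders in cases:
    assert version1(customer, orders) == version2(customer, orders)
assert [version1(c, o) for c, o in cases] == ["T", "F", "T", "T"]
```

Any other truth table for `implies` would make some row of `version2` disagree with `version1`, changing the meaning of the program.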

That's why "implies" as defined is useful. It lets us reason accurately and concisely about the correctness of computer programs that contain conditional statements.

Now, you might argue that "implies" is the wrong word to use, because "implies" is imbued with some meaning that you think does not match this truth table. But that's a fact about your intuition; it doesn't change the fact that this operator is useful as defined for reasoning logically about the correctness of programs.

Solution 4:

In this case it is probably a good idea to think of (classical) implication as inclusion in the following sense:

$\varphi \Rightarrow \psi$ holds if the set of witnesses of $\varphi$ is a subset of the witnesses of $\psi$.

An example:

If a natural number is a prime greater than $2$, then the number is odd.

This amounts to saying that the set of primes greater than $2$ is a subset of the odd natural numbers.

The set of witnesses of $\textsf{false}$ is the empty set $\emptyset$.

Consequently, $\textsf{false} \Rightarrow \psi$ is true if $\emptyset$ is a subset of the witnesses of $\psi$. And this is of course always the case.
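This witness-set reading is easy to check concretely. A small Python sketch over a finite domain (the helper names `witnesses` and `is_prime` are my own) illustrates both the prime example and the empty-set argument:

```python
# Witness-set reading of implication: phi => psi holds iff
# witnesses(phi) is a subset of witnesses(psi), over a finite domain.
domain = set(range(1, 50))

def witnesses(pred):
    return {x for x in domain if pred(x)}

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

primes_gt_2 = witnesses(lambda x: is_prime(x) and x > 2)
odds = witnesses(lambda x: x % 2 == 1)

# "If a natural number is a prime greater than 2, then it is odd":
assert primes_gt_2 <= odds

# The witness set of "false" is empty, and the empty set is a subset
# of every set, so false => psi holds no matter what psi is:
false_witnesses = witnesses(lambda x: False)
assert false_witnesses == set()
assert all(false_witnesses <= witnesses(p)
           for p in (lambda x: x % 2 == 0, lambda x: x > 100, lambda x: False))
```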