Propositional Logic - Can you Derive $C \to A$ from $A$ alone, given the introduction rule?
Apparently, according to the Conditional Introduction rule, this is valid: given $A$, prove $C \to A$.
Source: http://kpaprzycka.wdfiles.com/local--files/logic/W12R Page 5
So before this, the way I viewed the CI rule was that it allowed us to show that, if we assume the antecedent is true, the consequent can be derived from that antecedent by referring to the premises which lead to the consequent. If we later find that the antecedent is true, then we can conclude what we had already concluded earlier under the mere assumption. So basically, subderivations are inactive at first, but are activated if the assumption of said subderivation is confirmed.
In this example, it's basically stating that the mere assumption of some proposition is enough to claim that it implies some other already proven proposition. It's as if you might as well have skipped over the subderivation and said, 'Given $A$, $C \to A$, because I want it to be.'
Personally, I think this is a mistake, though if it isn't, I can't see how you could justify $C \to A$ from $A$ alone, merely by assuming $C$ is true.
In Natural Deduction it is correct, given $A$, to derive $C \to A$.
See:
- Jan von Plato, Elements of Logical Reasoning (2013), page 22:
Consider as another case $A ⊃ (B ⊃ A)$.
Verbally, if we assume $A$, then $A$ follows under any other assumption $B$:
1. $A$ --- hypothesis: goal $B ⊃ A$
2. $B ⊃ A$ --- 1, $⊃$I
3. $A ⊃ (B ⊃ A)$ --- 1–2, $⊃$I
This does not look particularly nice: we have closed an assumption $B$ that was not made. But if we say that an assumption was used $0$ times, the thing starts looking more reasonable.
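For comparison, here is the derivation above rendered as a Lean 4 proof term (my own sketch, not from von Plato's text; the name `k_axiom` is just my label): the lambda that binds the assumption $B$ never uses it, which is exactly the "used $0$ times" situation the quoted passage describes.

```lean
-- A → (B → A): the inner lambda binds B but never uses it, mirroring the
-- natural-deduction derivation in which the assumption B is closed after
-- being used zero times.
theorem k_axiom (A B : Prop) : A → (B → A) :=
  fun a => fun _b => a
```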
This is nothing more than the "usual" axiom of Hilbert-style propositional calculus:
$\vdash A \to (B \to A)$
which is, of course, a tautology.
The idea is simply:
"if $A$ is true, then $B \to A$ is true also".
(This means that "$A$ is given", but in no way do we have to assume that "$B$ is true also".)
In classical logic, where we admit the equivalence of $p \to q$ and $\lnot p \lor q$, the above derivation is quite similar to:
1. $A$ --- hypothesis
2. $\lnot B \lor A$ --- 1, $\lor$I
3. $A \to (\lnot B \lor A)$ --- 1–2, $\to$I
which looks "less weird".
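This classical-style variant can be checked the same way; here is a minimal Lean 4 sketch of my own (the statement is the one derived just above), where $\lnot B \lor A$ comes from introducing the right disjunct:

```lean
-- A → (¬B ∨ A): after assuming A, ∨-introduction on the right disjunct
-- produces ¬B ∨ A without B ever being considered.
example (A B : Prop) : A → (¬B ∨ A) :=
  fun a => Or.inr a
```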
It's not that $C\Rightarrow A$ holds just because you want it to. The semantics of implication defines it as true whenever the antecedent is false or the consequent is true. Since you know $A$ is true by assumption, it does not matter whether $C$, or whatever other antecedent, is false or "unrelated" (as in "we know it is true that the Earth is round; therefore, if unicorns have PhDs, then the Earth is round"). It's all about semantics.
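To make that truth-table reading concrete, here is a small Boolean check, a sketch of my own that encodes "if $c$ then $a$" as `!c || a`:

```lean
-- Boolean rendering of the semantics: the implication is false only in the
-- row where the antecedent is true and the consequent is false.
def boolImp (c a : Bool) : Bool := !c || a

#eval boolImp true  true   -- true  (consequent true)
#eval boolImp false true   -- true  (antecedent false, consequent true)
#eval boolImp true  false  -- false (the only falsifying row)
#eval boolImp false false  -- true  (antecedent false)
```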
Another way to see this is by the following argument: a disjunction is defined as being true whenever one of its disjuncts is true. So if we know $A$ is true, it does not matter what we add to it by disjunction; the disjunction will still be true in virtue of $A$'s truth. Now consider $\neg C\vee A$. We know it is true because of $A$'s truth. And since $\neg C\vee A$ is equivalent to $C\Rightarrow A$, this justifies the truth of the implication you were concerned with.
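The direction of that equivalence which the argument actually uses, from $\neg C\vee A$ to $C\Rightarrow A$, already holds constructively; here is a minimal Lean 4 sketch of my own:

```lean
-- ¬C ∨ A entails C → A: given C, the ¬C branch yields A by contradiction
-- and the A branch yields A directly.
example (C A : Prop) : (¬C ∨ A) → (C → A) :=
  fun h c => h.elim (fun nc => absurd c nc) (fun a => a)
```

The converse direction, from $C\Rightarrow A$ to $\neg C\vee A$, needs excluded middle, which is why the equivalence is specifically a classical one.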
You may be wondering why one should accept something like my unicorns-and-Earth example when it is not intuitive at all. I suggest you research topics such as Strong/Strict Implication in non-classical logics. In many of those systems your intuition on the matter is correct, but that is not the case for classical logic.