Conditional proof demonstration


I know how to use conditional proof assumption to demonstrate that $A \rightarrow C$. For example, having the premises:

  1. A → B ("If A, then B")
  2. B → C ("If B, then C")

We can prove that $A \rightarrow C$ in the following manner:

  3. A (conditional proof assumption, "Suppose A is true")
  4. B (follows from lines 1 and 3, modus ponens; "If A then B; A, therefore B")
  5. C (follows from lines 2 and 4, modus ponens; "If B then C; B, therefore C")
  6. A → C (follows from lines 3–5, conditional proof; "If A, then C")

However, I do not understand exactly why this works. What is the formal proof for this method?

I know that $A \rightarrow C$ will be true in all cases except when A is TRUE and C is FALSE. Therefore, if A is TRUE, then C must necessarily be TRUE in order for $A \rightarrow C$ to be true. But I don't know whether A is TRUE or FALSE, so why does making the assumption "A is true" always lead us to a correct result?


There are 5 best solutions below

On BEST ANSWER

Steps 1 through 5 show that the two givens $A \to B$ and $B \to C$, together with the assumption $A$, logically imply the statement $C$.

Using the $\vDash$ symbol for logical consequence, we can write this as: $$\{ A \to B, B \to C, A \} \vDash C \tag{1}$$

Now, on line 6, you conclude that this means that $A \to C$ follows from the two givens $A \to B$ and $B \to C$ alone, i.e. that $$\{ A \to B, B \to C\} \vDash A \to C \tag{2}$$

Your question is: why is this correct? Why does (1) imply (2)?

Well, for $(2)$ to be the case, it must be impossible, for any truth-assignment to the variables $A$, $B$, and $C$, for $A \to B$ and $B \to C$ to be true while $A \to C$ is false. But the only way for $A \to C$ to be false is when $A$ is true and $C$ is false. Hence, $(2)$ is the case if, for any truth-assignment to the variables $A$, $B$, and $C$, it is impossible for $A \to B$, $B \to C$, and $A$ to all be true while $C$ is false. But that is exactly what $(1)$ is telling us. So, $(1)$ implies $(2)$.
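Both semantic claims can be checked mechanically by brute force over all eight truth assignments; here is a minimal sketch in Python (the `implies` helper and variable names are my own, not part of the answer):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# (1): {A -> B, B -> C, A} |= C
one = all(C
          for A, B, C in product([True, False], repeat=3)
          if implies(A, B) and implies(B, C) and A)

# (2): {A -> B, B -> C} |= A -> C
two = all(implies(A, C)
          for A, B, C in product([True, False], repeat=3)
          if implies(A, B) and implies(B, C))

print(one, two)  # True True: no assignment violates either claim
```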

... and that is what the rule of Conditional Introduction does in general. Where $\Gamma$ is any set of statements, $\varphi$ is some assumption, and $\psi$ is some statement you are able to derive from the assumption together with the given statements $\Gamma$, you go from the finding that:

$$\Gamma \cup \{\varphi\} \vDash \psi$$

to:

$$\Gamma \vDash \varphi \to \psi$$

and the validity of that follows the same reasoning as above.

On

Here is the proof written in natural deduction: $\def\fitch#1#2{\quad\begin{array}{|l}#1\\\hline#2\end{array}}$ $$\fitch{1.~~A\to B\\2.~~B\to C}{\fitch{3.~~A}{4.~~B\hspace{5ex}\to\text{Elim }1,3\\5.~~C\hspace{5ex}\to\text{Elim }2,4}\\6.~~A\to C\hspace{3.2ex}\to\text{Intro }3\text{–}5}$$

The rule used in lines $4$ and $5$ is also known as modus ponens; the rule for line $6$ is also known as Conditional Introduction.
Even though we don't know whether $A$ is true or false, we only need to prove that the implication holds when $A$ is true, since if the antecedent is false, the statement $A\to C$ is vacuously true.
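The same derivation can be rendered in a proof assistant; here is a sketch in Lean 4, where the Conditional Introduction step corresponds to introducing the hypothesis with `fun a => …`, and the two modus ponens steps are the applications of the premises (the names `h1` and `h2` are my own):

```lean
-- A, B, C are arbitrary propositions; h1 and h2 are the two premises.
example (A B C : Prop) (h1 : A → B) (h2 : B → C) : A → C :=
  fun a => h2 (h1 a)  -- assume A (line 3), derive B via h1 (line 4), then C via h2 (line 5)
```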

On

Consider that a conditional statement of the form $p \to q$ is false if and only if the antecedent, $p$, is true and the consequent, $q$, is false.

If you assume $p$ is true on its own line and are subsequently able to deduce $q$ using the rules of inference, then $q$ must be true if $p$ is true (because $q$ was validly deduced from $p$ using the rules of inference).

Thus, if $p$ is true, then $q$ is true and the conditional statement $p \to q$ must be true. If $p$ is false, then the conditional statement $p \to q$ is still true. Hence, the conditional method is a valid means of producing conditional statements.
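This case analysis can be verified by enumerating the four truth assignments to $p$ and $q$ (a minimal check; `implies` models the material conditional and is my own helper):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only in the single case p = True, q = False.
    return (not p) or q

for p, q in product([True, False], repeat=2):
    print(p, q, implies(p, q))

# The conditional is false exactly once: at p = True, q = False.
falsifying = [(p, q) for p, q in product([True, False], repeat=2)
              if not implies(p, q)]
print(falsifying)  # [(True, False)]
```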

On

I know that $A→C$ will be true for all cases except when A is TRUE and C is FALSE.

That is exactly the reason. The semantics is that $A\to C$ is valued as false only by valuing $A$ as true and $C$ as false.

Therefore there is no need to consider what happens when $A$ is valued as false, since that vacuously values $A\to C$ as true.

Thus the syntactic proof only needs to ensure that the truth of $C$ can be validly derived under an assumption of the truth of $A$.

On

For those who are unfamiliar with set notation in logic, here I provide a translation of Bram28's awesome demonstration, but using simple conjunctions instead of sets.

Let's say that $\delta$ contains all the logical variables of our problem (i.e., the variables that can only take the value true or false):

$$\delta = \{A,B,C,D, \ldots\}$$

We also have a collection of givens $P_1(\delta), P_2(\delta), \ldots, P_N(\delta)$ that depend on the aforementioned variables. Now, let's say we want to prove that $A \rightarrow C$ is true based on those givens. That is, we want to prove that the following expression is true for any truth-assignment to the variables contained in $\delta$ (i.e. tautology):

$$(P_1(\delta) \wedge P_2(\delta) \wedge \ldots \wedge P_N(\delta)) \rightarrow (A \rightarrow C) \tag{1}$$

At first glance, this may be hard to prove directly, so we use the conditional proof assumption approach to make the process easier.

The conditional proof assumption states that:

If we are able to prove (using the well-known inference rules) that the following expression evaluates to true for any truth-assignment to the variables contained in $\delta$ (i.e. tautology):

$$[(P_1(\delta) \wedge P_2(\delta) \wedge \ldots \wedge P_N(\delta)) \wedge A] \rightarrow C \tag{2}$$

then, we can be sure that the following expression (the one we mentioned initially) will also evaluate to true for any truth-assignment to the variables contained in $\delta$ (i.e. tautology):

$$(P_1(\delta) \wedge P_2(\delta) \wedge \ldots \wedge P_N(\delta)) \rightarrow (A \rightarrow C) \tag{1}$$

This is because (2) implies (1). Or expressed mathematically:

$$[(P_1(\delta) \wedge P_2(\delta) \wedge \ldots \wedge P_N(\delta)) \wedge A] \rightarrow C \Rightarrow (P_1(\delta) \wedge P_2(\delta) \wedge \ldots \wedge P_N(\delta)) \rightarrow (A \rightarrow C)$$

But why does (2) imply (1)? Because when (2) is true, (1) is necessarily true. The only way that (1) can be false is when the antecedent $(P_1(\delta) \wedge P_2(\delta) \wedge \ldots \wedge P_N(\delta))$ is true and the consequent $(A \rightarrow C)$ is false. Likewise, the only way that $A \rightarrow C$ can be false is when the antecedent $A$ is true and the consequent $C$ is false. So, in other words, the only way that (1) can be false is when $(P_1(\delta) \wedge P_2(\delta) \wedge \ldots \wedge P_N(\delta))$ and $A$ are true, and $C$ is false. But if (2) is true and $(P_1(\delta) \wedge P_2(\delta) \wedge \ldots \wedge P_N(\delta))$ and $A$ are true, then $C$ will also be true. So (2) implies (1).
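The implication used here is an instance of the equivalence often called exportation, and it can likewise be confirmed by brute force; a sketch with a single variable $P$ standing in for the whole conjunction of givens (the helper name is my own):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# Exportation: ((P and A) -> C)  iff  (P -> (A -> C))
tautology = all(
    implies(P and A, C) == implies(P, implies(A, C))
    for P, A, C in product([True, False], repeat=3)
)
print(tautology)  # True: the two forms agree on every assignment
```

Since the two forms are equivalent, (2) implies (1) as claimed (and (1) implies (2) as well).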

Therefore, this demonstrates that the conditional proof assumption (CPA) method works.