Can we use the principle of explosion to justify the definition of implication being True when the antecedent is False?


I was trying to understand why the last two entries of the truth table for implication are set to True when the antecedent is False (i.e. when $A$ is False in $A \Rightarrow B$), and its relation to the principle of explosion (if any). I know this is just the definition of the truth table, but I am trying to understand, perhaps at a deeper philosophical level, why the definition is the way it is. I don't believe it's an arbitrary choice. For example, I am aware of the way to see an implication as a promise (e.g. this link), which actually provides a nice intuitive way to remember it. But I believe there is a deeper, more fundamental reason. If the definition weren't like that, I hypothesize (perhaps wrongly, hence my question!) that some more fundamental inference rule of logic wouldn't hold. This is my reasoning (I will flag the step I question and want to see whether it is justifiable):

We want to decide how to define the Boolean function $A \Rightarrow B$ when $\lnot A$. The first step is to unpack what happens in a proof of $A \Rightarrow B$. [This is the questionable step.] When we want to proceed in a proof of an implication, we must start by assuming $A$ is true (i.e. when we start doing "the mathematics of an implication"). Note that this immediately gives us both $A$ and $\lnot A$ (if $A$ were actually false). Now, from the principle of explosion we can conclude anything for any $B$, and in particular also $\lnot B$. It follows that the last two entries of the truth table must both be True. If they were not, then the principle of explosion (which depends on conjunction elimination, disjunction introduction, and disjunction elimination) would have to be rewritten, or else some more fundamental inference rule that it depends on would. Since those other three rules cannot be rewritten or ignored, being arguably more "obviously true", $A \Rightarrow B$ must be True when $A$ is False.

Does this make sense? Is my assumption sound, namely that we start the "thinking" or "computation" of an implication by internally assuming, inside the implication, that $A$ is true? Is that really what is happening when we go ahead and "prove" an implication?

The reason I am a tiny bit skeptical is that $f_{\text{implication}}(A,B) = (A \Rightarrow B)$ is just a Boolean function. Therefore $A$ is an input: one can't just assume it to be true if one plugs in $\lnot A$. However, maybe $f_{\text{implication}}(A,B)$ works by algorithmically first assuming $A$ is True. Then, if $A = $ False happens to be the input, it automatically outputs True because of the principle of explosion (I am assuming it would have some way of knowing that the statement $A$ is actually false, so it would notice the contradiction and spit out True by the principle of explosion). So I guess I am suggesting a model of how the implication actually works inside. The model is that $f_{\text{implication}}(A,B)$ starts by assuming $A$ is True and proceeds either by "doing the correct proof" (the first two lines of the truth table) or, if the input was actually $\lnot A$ (i.e. $A = $ False), by noticing the contradiction.

Is the model I have in my head for how an implication works correct (or at least good)?
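
To make the model concrete, here is a minimal Python sketch of what I have in mind (the function names are mine, purely for illustration):

```python
# The standard Boolean function for implication: false only on the row
# where A is True and B is False.
def implies_truth_table(a: bool, b: bool) -> bool:
    return (not a) or b

# The model described above: internally assume A is True; if the actual
# input contradicts that assumption, we have A and not-A, so "explode"
# and return True. Otherwise A really is True, and A => B holds iff B does.
def implies_proposed_model(a: bool, b: bool) -> bool:
    assumed_a = True
    if a != assumed_a:   # contradiction: we assumed A but were given not-A
        return True      # principle of explosion: anything follows
    return b             # A is genuinely true, so the implication reduces to B

# The two definitions agree on all four rows of the truth table:
for a in (True, False):
    for b in (True, False):
        assert implies_truth_table(a, b) == implies_proposed_model(a, b)
```

At least computationally, the "assume $A$, explode on contradiction" model and the usual truth table are extensionally the same function.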


More thoughts:

Note that one of the reasons I think this is that to say "$A$ implies $B$" in a logical sense surely should mean that $B$ is derivable from $A$ within a given deductive system of axioms and rules. That has to mean that if $A$ is the case, $B$ also is the case; and conversely, if $A$ is the case and $B$ is not the case, then $A$ does not imply $B$.

But we should keep in mind what we are doing. We are asking "is $A \Rightarrow B$ true when $A$ is false?" Or, equivalently [this is the part I am unsure about], "if $A$ is true, will $B$ also be true, given that $A$ is false?" But this is asking whether $B$ follows from a contradiction (assuming both $A$ and $\neg A$). In other words, in the last two lines we assume $A$ is false and then ask "with this assumption, does assuming $A$ is true imply that $B$ is true?" But assuming $A$ is true while also assuming $A$ is false is to assume a contradiction (hence the principle of explosion), and that implies $B$.

Is this right?


Now that I have thought about this more, it seems that the source of the contradiction (explosion) I claim comes from what an implication is supposed to mean to me:

If $A$ is true, does $B$ follow?

Thus, if we assume $A$ is False and at the same time ask "if $A$ then $B$", then the contradiction I claim follows naturally.

That's, I guess, the crux of my question. Is that correct? Or is this just a psychological/philosophical detail rather than a purely mathematical one?


There are 5 answers below.

Answer 1 (score 9)

Yes, you can think of this as coming from the principle of explosion.

I prefer to think of it this way: $A \Rightarrow B$ means that in all cases where $A$ is true, $B$ is also true. (This definition carries more weight once $A$ and $B$ have unbound parameters in them and aren't just single propositions.) To check whether $A \Rightarrow B$, you look at every case in which $A$ is true to check whether $B$ is also true in those cases. So if $A$ is just false, there are no cases that need to be checked, which makes the implication vacuously true.
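
To make "no cases to check" concrete, here is a small Python sketch (the predicates are made up for illustration): checking $A \Rightarrow B$ means checking $B$ in every case where $A$ holds, and a check over zero cases succeeds vacuously.

```python
# Check A => B by inspecting every case in which A is true.
cases = range(10)
A = lambda x: x > 100      # false in every case here
B = lambda x: x % 2 == 0

# all(...) over an empty sequence is True: with no cases to check,
# the implication is vacuously true.
implication_holds = all(B(x) for x in cases if A(x))
print(implication_holds)   # True
```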

Answer 2 (score 2)

$A\Rightarrow B$ where $A$ and $B$ are formulas is a piece of syntax, i.e. symbols on a page (or I guess "screen" would be more contemporary). When we talk about "truth tables", we are interpreting this syntax. For classical propositional logic, this interpretation maps formulas into a two-element set. Often the two elements are referred to as "truth" and "falsity", but they could be any two things, e.g. the words "dog" and "cat". I'll use $0$ and $1$. I'll write $[\![P]\!]$ for the interpretation of the formula $P$. As part of our definition of interpretation, we require that $[\![A\Rightarrow B]\!]=[\![\Rightarrow]\!]([\![A]\!],[\![B]\!])$ where $[\![\Rightarrow]\!]$ is a binary function from the two-element set to the two-element set. This function is what is normally referred to as the "truth table" for $\Rightarrow$. Notice that this function is not given the formulas $A$ and $B$, only their interpretations which are elements of that two-element set, i.e. $0$ or $1$ with my choice.

So it's the interpretation function, $[\![-]\!]$, which "figures out" whether a formula is "true" or not. Of course, that isn't quite true either. Instead, the interpretation function passes the buck off to you. The interpretation function is usually parameterized by an assignment of $0$ and $1$ to the proposition variables. The interpretation function can tell you the value of $\ulcorner\text{it is raining}\urcorner\Rightarrow\ulcorner\text{the ground is wet}\urcorner$, but only after you tell it whether it is raining and whether the ground is wet.
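
Here is a toy version of that interpretation function, sketched in Python (the formula encoding and names are invented for this example): formulas are interpreted recursively, and the function can only answer once you hand it an assignment for the proposition variables.

```python
# The truth table for => as a binary function on {0, 1}.
IMPLIES_TABLE = {(0, 0): 1, (0, 1): 1, (1, 0): 0, (1, 1): 1}

def interp(formula, assignment):
    """[[-]]: interpret a formula, given an assignment of 0/1 to variables."""
    if isinstance(formula, str):        # a proposition variable
        return assignment[formula]
    op, left, right = formula
    if op == "=>":                      # [[A => B]] = [[=>]]([[A]], [[B]])
        return IMPLIES_TABLE[(interp(left, assignment), interp(right, assignment))]
    raise ValueError(f"unknown connective: {op}")

# It answers only after you say whether it is raining and the ground is wet:
wet_if_raining = ("=>", "raining", "ground_wet")
print(interp(wet_if_raining, {"raining": 0, "ground_wet": 0}))  # 1
```

Note that `interp` never sees the formulas themselves in any deeper sense, only the $0$s and $1$s you supplied; that is the buck-passing described above.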

At this point you could, as some have, just say that $[\![\Rightarrow]\!]$ is one of 16 possible "Boolean" functions and we just happened to give this one this name. At which point you can either argue that some other "Boolean" function should have this name (which is usually not the argument made), or you have to argue against this notion of interpretation altogether. (Or, of course, you could not argue and accept it.)

There's another approach to understanding logics, and this is where rules of inference like the principle of explosion live. Here, instead of interpreting formulas into some mathematical objects and saying certain interpretations correspond to "truth", a semantic approach, we combine uses of rules together to make (formal) proofs and say what is "true" is what is provable. Whether it is derivable from other rules or assumed, the principle of explosion just flat out states that $\bot\Rightarrow A$ is provable for all formulas $A$ where I write $\bot$ for the nullary logical connective representing "falsity". Often something like $(B\land\neg B)\Rightarrow A$ is used instead.

For classical propositional logic, we have soundness and completeness theorems that state given the rules of classical propositional logic (or any equivalent set of rules), the notion of syntactic "truth", i.e. provability, coincides with the notion of semantic "truth", i.e. an interpretation with value $1$. This means that the principle of explosion being derivable is the same thing as $[\![\Rightarrow]\!](0,x) = 1$ for all $x$. So, with the soundness and completeness theorems (or even just the soundness theorem), the principle of explosion implies the behavior of $[\![\Rightarrow]\!]$ that you're talking about.
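
In the toy encoding above (with my invented `IMPLIES_TABLE`), the semantic side of that correspondence is just this check:

```python
# The semantic counterpart of explosion: [[=>]](0, x) = 1 for all x.
for x in (0, 1):
    assert IMPLIES_TABLE[(0, x)] == 1
```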

As an epilogue, there are many things we can change to get a different outcome. There are many non-classical semantics and corresponding non-classical logics; they usually do support the principle of explosion, but their semantics for it may be more compelling than "truth tables". Even sticking to classical logic, there are other semantics that may be more compelling. Qiaochu Yuan's answer can be viewed as talking about the semantics of classical predicate logic. There, the semantics of the principle of explosion is that the empty set is a subset of any set, i.e. $\emptyset\subseteq X$ for all $X$. This may seem more compelling than a seemingly arbitrary assignment of outcomes in a "truth table". There are also logics that don't have the principle of explosion, so-called paraconsistent logics, which thus necessarily have a different notion of interpretation, though it can look quite similar (or very different).

Answer 3 (score 5)

The OP's question at one point seems to be: assuming the conditional has a truth-table, i.e. that it is a Boolean function, what should its table be?

With that assumption, there are simple and compelling arguments for the usual table, and we certainly don't need to tangle with explosion (we can do, but it is not the simplest way to go).

Here's one argument which should be familiar.

On any view $(P \land Q) \to P$ needs to come out as a logical truth. If $\to$ is boolean, that means it must be a tautology.

But of course, the antecedent and consequent of this tautology $(P \land Q) \to P$ can take any of the value pairs TT, FT, FF, while the whole conditional is always T. So that forces the table for the conditional connective $\to$ to read T on its TT, FT, and FF lines.

The fourth line of the table for $\to$ is of course dictated by the triviality that a TF conditional is F.

End of story. There really is nothing to debate about which truth-function is even minimally conditional-like (it is trivial that no other candidate comes close). The hard question of course is whether the ordinary-language (singular, indicative) conditional is in fact truth-functional or boolean at all.
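
For the skeptical, the forcing argument can be brute-checked in a few lines of Python (an illustration, not part of the argument itself): of the 16 binary truth-functions, exactly one makes $(P \land Q) \to P$ a tautology while giving F on the TF line.

```python
from itertools import product

rows = list(product([True, False], repeat=2))   # (antecedent, consequent) pairs

survivors = []
for values in product([True, False], repeat=4):
    table = dict(zip(rows, values))
    # Constraint 1: (P and Q) -> P comes out true for every P, Q.
    tautology = all(table[(p and q, p)] for p in (True, False) for q in (True, False))
    # Constraint 2: a T-antecedent, F-consequent conditional is false.
    if tautology and not table[(True, False)]:
        survivors.append(table)

print(len(survivors))   # 1: the usual material conditional
print(survivors[0])     # True on TT, FT, FF; False on TF
```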

For strong reasons for supposing it isn't truth-functional, see e.g. here. Though probably mathematicians shouldn't care too much. After all, Frege quite explicitly (re)introduced the truth-functional conditional into logic as a very tidy, well-behaved, well-understood, surrogate or replacement for the messy vernacular conditional, a surrogate that works well enough for mathematical purposes. So why not be content with that?!

Answer 4 (score 0)

Let me introduce a notation. If $\Gamma$ is a (possibly empty) list of logical formulas and $P$ is a logical formula, then the notation

$$ \Gamma \vdash P $$

is the assertion that "there is a proof of $P$ from the hypotheses $\Gamma$". For example,

$$ P, Q \vdash P \wedge Q $$

$$ \text{Peano's Axioms},\ 2x=4 \vdash x = 2 $$

One of the ways to characterize implication is:

Theorem For any list of formulas $\Gamma$ and any formulas $P,Q$, the following are equivalent:

  • $\Gamma \vdash P \to Q $
  • $\Gamma, P \vdash Q $

So, starting with the hypotheses in $\Gamma$, a proof of $P \to Q$ can be constructed exactly as you said:

  1. You start by including $P$ in your hypotheses
  2. You prove $Q$
  3. You invoke the above theorem

If you can prove $\neg P$, or more generally, $\Gamma \vdash \neg P$, then you can do step two above exactly as you suggest:

  • $P$
  • $\neg P$
  • $\therefore Q$

All of the above is talking about provability. However, one of the basic theorems of first-order logic is the following.

A truth valuation is a function $v$ that assigns "true" and "false" to all formulas in a way that satisfies obvious properties like

$$ v(P \wedge Q) = f_{\mathrm{conjunction}}(v(P), v(Q)) $$

Then we have

Theorem: The following assertions are equivalent:

  • $\Gamma \vdash P$
  • Every truth valuation with the property that $v(Q) = \mathrm{true}$ for each $Q \in \Gamma$ also has the property that $v(P) = \mathrm{true}$

So, as you've done, you can use the rules of inference to determine what the truth table for implication (and the other logical connectives) must be.
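
As a sketch of that last step (the formula encoding here is invented for illustration), one can check $\Gamma \vDash P$ in Python by enumerating truth valuations; note how $P, \neg P \vDash Q$ holds vacuously, mirroring the explosion step above.

```python
from itertools import product

def entails(gamma, p, variables):
    """Check that every valuation making all of gamma true also makes p true."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(q(v) for q in gamma) and not p(v):
            return False    # found a counter-valuation
    return True

A     = lambda v: v["A"]
not_A = lambda v: not v["A"]
B     = lambda v: v["B"]

# No valuation satisfies both A and not-A, so the check passes vacuously:
print(entails([A, not_A], B, ["A", "B"]))   # True
```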

Answer 5 (score 0)

"We want to decide how to define the Boolean function $A\Rightarrow B$ when $\lnot A$. When we want to proceed in a proof of an implication, we must start by assuming $A$ is true (i.e. when we start doing 'the mathematics of an implication'). Note that this immediately gives us both $A$ and $\lnot A$. Now, from the principle of explosion, the last two entries of the truth table must both be True."

No, defining A→B for the case of a false antecedent involves none of the following:

  • assuming that the antecedent is true
  • proving A→B
  • assuming A→B.

"Can we use the principle of explosion to justify the definition of implication being True when the antecedent is False?"

The principle of explosion precisely means (is a name for), rather than merely justifies, the definition $$\text{for every well-formed sentence $S,$ the sentence (False}\to S)\text{ is true}.$$
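
In truth-table terms this is a one-line check (a trivial Python sketch, just to make the quantification over $S$ concrete):

```python
# (False -> S) is true for every truth value of S.
implies = lambda a, b: (not a) or b
assert all(implies(False, s) for s in (True, False))
```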

"start by assuming A is true (i.e. when we start doing 'the mathematics of an implication')"

It is because of the principle of explosion (vacuous truth) that proving $A\implies B$ does not necessitate considering the case for which $A$ is false.