What exactly is the relationship between truth tables and rules of inference?


When I look at the rules of inference, $$ \begin{align} \land \ \mathrm{I}& \ \frac{p \ \ \ \ \ q}{p \land q} \ \ \ \ \ \ \ \ &\frac{p \land q}{p \ \ \ \ \ q} \land \mathrm{E} \\ \\ \not\rightarrow \mathrm{I}& \ \frac{p \ \ \ \ \lnot q}{p \not\rightarrow q} \ \ \ \ \ \ \ \ &\frac{p \not\rightarrow q}{p \ \ \ \ \lnot q}\not\rightarrow \mathrm{E} \\ \\ \not\leftarrow \mathrm{I} & \ \frac{\lnot p \ \ \ \ \ \ q}{p \not\leftarrow q} \ \ \ \ \ \ \ &\frac{p \not\leftarrow q}{\lnot p \ \ \ \ \ q }\not\leftarrow \mathrm{E} \\ \\ \downarrow \mathrm{I}& \ \frac{\lnot p \ \ \ \lnot q}{p \downarrow q} \ \ \ \ \ \ \ \ &\frac{p \downarrow q}{\lnot p \ \ \ \ \lnot q} \downarrow \mathrm{E} \end{align} $$ and the truth table, $$\begin{array}{cc|cc|cccc} p & q & \lnot p & \lnot q & p \land q & p \not\rightarrow q & p \not\leftarrow q & p \downarrow q \\ \hline 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 \\ \end{array}$$ for connectives that have only one occurrence of $1$ in their column in a truth table (i.e., $\land, \not\rightarrow, \not\leftarrow, \downarrow $) I'm able to see the connection between the truth table and the rules of inference; the thing(s) on the top part of the rule evaluate(s) to $1$ on the truth table if and only if the thing(s) on the bottom part of the rule evaluate(s) to $1$ on the truth table.
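This observed pattern can be checked mechanically. Below is a small Python sketch (the names `nimp`, `nconv`, and `nor` for $\not\rightarrow$, $\not\leftarrow$, and $\downarrow$ are my own ad hoc labels) that brute-forces all four valuations:

```python
from itertools import product

# Ad hoc names: 'nimp' is p ↛ q, 'nconv' is p ↚ q, 'nor' is p ↓ q.
# Each connective is paired with the premises of its I rule
# (equivalently, the conclusions of its E rule).
rules = {
    "and":   (lambda p, q: p and q,         lambda p, q: (p, q)),
    "nimp":  (lambda p, q: p and not q,     lambda p, q: (p, not q)),
    "nconv": (lambda p, q: not p and q,     lambda p, q: (not p, q)),
    "nor":   (lambda p, q: not p and not q, lambda p, q: (not p, not q)),
}

# The observation: on every row of the truth table, the compound formula
# evaluates to 1 iff *all* premises of its rule evaluate to 1.
for name, (conn, tops) in rules.items():
    for p, q in product([False, True], repeat=2):
        assert conn(p, q) == all(tops(p, q)), name
print("pattern holds for all four one-1 connectives")
```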

Clearly this observation doesn't extend to any of the other connectives. Most curious to me, the rules for the connectives that have only one occurrence of $0$ in their column in a truth table (i.e., $\lor, \rightarrow, \leftarrow, \uparrow$) are kind of 'ugly', and some seem to have no obvious connection to truth tables at all. $$ \lor \ \mathrm{I} \ \frac{p}{p \lor q} \ \ \ \ \ \ \ \lor \mathrm{I} \ \frac{q}{p \lor q} \ \ \ \ \ \ \ \ \frac{p \rightarrow r \ \ \ q \rightarrow r \ \ \ p \lor q}{r} \lor \mathrm{E} $$ $$ \rightarrow \mathrm{I} \ \frac{p ⟹ q}{p \rightarrow q} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \frac{p \ \ \ \ p \rightarrow q}{q}\rightarrow \mathrm{E} $$ $$ \leftarrow \mathrm{I} \ \frac{p ⟸ q}{p \leftarrow q} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \frac{p \leftarrow q \ \ \ \ q}{p}\leftarrow \mathrm{E} $$ $$ \uparrow \mathrm{I} \ \frac{p ⟹ r}{p \uparrow (q \uparrow r)} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \frac{p \ \ \ \ p \uparrow (q \uparrow r)}{r} \uparrow \mathrm{E} $$ If we had two 'provability' relations where the standard one is modelled off of $\rightarrow$ ($⟹$) and nonstandard one is modelled off of $\not\rightarrow$, would the rules for all or any of the ($\lor, \rightarrow, \leftarrow, \uparrow$)-connectives under the nonstandard relation look like the rules for the ($\land, \not\rightarrow, \not\leftarrow, \downarrow $)-connectives under the standard ($⟹$) relation?


UPDATE:

In the original post I made a comment about how the space between premises in a rule is treated like 'and', which lemontree addressed in their answer. Instead of having the space be treated as an 'and' (or 'or'), I am going to explicitly write '$\color{red}{ and }$' (or '$\color{blue}{ or }$') in when that is what is meant.

$ \color{red}{ ⟹ } , \color{red}{⟸}$, and $\color{red}{⟺}$ should be seen the same as the standard reading of $⊢, ⊣$, and $⊣⊢$. For example, $p \color{red}{ \ and \ } q \color{red}{ ⟹ } p \land q$ should be thought of as $p, q ⊢ p \land q$. I mentioned in the original post that the syntactic relation $⊢$ ($\color{red}{ ⟹ }$) is "modelled off of" the logical connective $\rightarrow$, which Bram28 addressed in their answer. In the same way that $\color{red}{⟺}$ is connected with $\leftrightarrow$, I want to introduce another type of syntactic relation, $\color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}}$, which is meant to have an analogous connection with $\nleftrightarrow$. I hope what I have in mind becomes clear with the (currently incomplete) list below: $$ \begin{align} p \color{red}{ \ and \ } q \color{red}{ ⟺ } p &\land q \ \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} \lnot p \color{blue}{ \ or \ } \lnot q \\ p \color{red}{ \ and \ } \lnot q \color{red}{ ⟺ } p &\not\rightarrow q \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} \lnot p \color{blue}{ \ or \ } q \\ \lnot p \color{red}{ \ and \ } q \color{red}{ ⟺ } p &\not\leftarrow q \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} p \color{blue}{ \ or \ } \lnot q \\ \lnot p \color{red}{ \ and \ } \lnot q \color{red}{ ⟺ } p &\downarrow q \ \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} p \color{blue}{ \ or \ } q \\ p \color{blue}{ \ or \ } q \color{red}{ ⟺ } p &\lor q \ \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} \lnot p \color{red}{ \ and \ } \lnot q \\ p \color{blue}{ \ or \ } \lnot q \color{red}{ ⟺ } p &\leftarrow q \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} \lnot p \color{red}{ \ and \ } q \\ \lnot p \color{blue}{ \ or \ } q \color{red}{ ⟺ } p &\rightarrow q \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} p \color{red}{ \ and \ } \lnot q \\ \lnot p \color{blue}{ \ or \ } \lnot q \color{red}{ ⟺ 
} p &\uparrow q \ \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} p \color{red}{ \ and \ } q \\ ? \color{red}{ ⟺ } p &\leftrightarrow q \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} ? \\ ? \color{red}{ ⟺ } p &\not\leftrightarrow q \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} ? \\ \lnot p \color{red}{\ and \ } \lnot p? \color{red}{ ⟺ } \ \ & \lnot p \ \ \ \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} p \color{blue}{\ or \ } p? \\ \lnot q \color{red}{\ and \ } \lnot q? \color{red}{ ⟺ } \ \ & \lnot q \ \ \ \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} q \color{blue}{\ or \ } q? \\ p \color{red}{\ and \ } p? \color{red}{ ⟺ } \ \ & \ \ p \ \ \ \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} \lnot p \color{blue}{\ or \ } \lnot p? \\ q \color{red}{\ and \ } q? \color{red}{ ⟺ } \ \ & \ \ q \ \ \ \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} \lnot q \color{blue}{\ or \ } \lnot q ? \\ p \color{blue}{\ or \ } \lnot p? \color{red}{ ⟺ } \ \ &\ \ 1 \ \ \ \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} \lnot p \color{red}{\ and \ } p ? \\ p \color{red}{\ and \ } \lnot p? \color{red}{ ⟺ } \ \ &\ \ 0 \ \ \ \color{blue}{ \require{enclose} \enclose{verticalstrike}{⟺}} \lnot p \color{blue}{\ or \ } p? \\ \end{align} $$

In this list the connectives that I thought had unintuitive rules under the standard syntactic relation (i.e., $\lor, \leftarrow, \rightarrow, \uparrow$) would have simple rules if the space was either interpreted as 'or' (instead of 'and') or if the rule was a rule for the second type of syntactic relation (the blue one). For example, $$ \land \ \mathrm{I} \ \color{red}{ \frac{\color{black}{p} \color{red}{ \ and \ } \color{black}{q}}{\color{black}{p \land q}}} \ \ \ \ \ \ \ \ \ \color{red}{ \frac{\color{black}{p \land q}}{\color{black}{p} \color{red}{ \ and \ } \color{black}{q}}} \land \mathrm{E} \\ \ \\ \uparrow \ \mathrm{I} \ \color{blue}{ \frac{\color{black}{p} \color{red}{ \ and \ } \color{black}{q}}{\color{black}{p \uparrow q}}} \ \ \ \ \ \ \ \ \ \color{blue}{ \frac{\color{black}{p \uparrow q}}{\color{black}{p} \color{red}{ \ and \ } \color{black}{q}}} \uparrow \mathrm{E} \\ \ \\ \lor \ \mathrm{I} \ \color{red}{ \frac{\color{black}{p} \color{blue}{ \ or \ } \color{black}{q}}{\color{black}{p \lor q}}} \ \ \ \ \ \ \ \ \ \color{red}{ \frac{\color{black}{p \lor q}}{\color{black}{p} \color{blue}{ \ or \ } \color{black}{q}}} \lor \mathrm{E} \\ \ \\ \downarrow \ \mathrm{I} \ \color{blue}{ \frac{\color{black}{p} \color{blue}{ \ or \ } \color{black}{q}}{\color{black}{p \downarrow q}}} \ \ \ \ \ \ \ \ \ \color{blue}{ \frac{\color{black}{p \downarrow q}}{\color{black}{p} \color{blue}{ \ or \ } \color{black}{q}}} \downarrow \mathrm{E} $$ I think this helps put into context why the rules for $\lor$ in the standard system look strange (and in the standard system the rules for $\land$ look very simple) but in a nonstandard system the rules for $\lor$ are simple (and in the nonstandard system the rules for $\land$ look strange).

At any rate, the entire bottom half of the list I made is just a best guess. The most confusing part is that I can't even guess what goes on the lines including $\leftrightarrow$ and $\not\leftrightarrow$ since it seems like I've already exhausted every possible combination of {$p,q,\lnot p, \lnot q$}{$\color{red}{and} , \color{blue}{or}$}{$p,q,\lnot p, \lnot q$} for each syntactic relation. What's going on here? I think this is at the heart of my question.

Please feel free to ask me for clarification about anything!


There are 3 answers below.

Answer 1

Question: If we had two 'provability' relations where one is modelled off of $\rightarrow$ ($⊢$) and one is modelled off of $\not\rightarrow$ ($\not⊢$), would the rules for all or any of the ($\lor, \rightarrow, \leftarrow, \uparrow$)-connectives under the $\not⊢$ relation look like the rules for the ($\land, \not\rightarrow, \not\leftarrow, \downarrow $)-connectives under the $⊢$ relation?

No.

By your I/E inference rules, $\not\to$ appears to mean "does not imply": $\lnot(p\to q)\iff p\land\neg q$.

However, $\nvdash$ is "not provable", which by your truth-table semantics would mean "The consequent is valued $0$ in at least one row where all the antecedents are valued $1$".

That is: $p\nvdash q$ merely says, "If you assume $p$, then you may not infer $q$." ($q$ may or may not be true when $p$ is.)
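The distinction can be made concrete with a quick Python sketch over the four valuations of $(p, q)$:

```python
from itertools import product

rows = list(product([False, True], repeat=2))  # all valuations of (p, q)

# p ⊬ q: some row makes p true but q false, so q is not guaranteed by p.
assert any(p and not q for p, q in rows)

# Yet q is not *disprovable* from p either: some row makes p and q both
# true, so ¬q is not guaranteed by p any more than q is.
assert any(p and q for p, q in rows)
```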

(Not provable is not the same as disprovable.)

Answer 2

I am not exactly sure what you are asking ... or maybe rather: there seem to be many little questions and thoughts in this post in addition to that one question at the end.

Anyway, here is some feedback:

First, you seem to be interested in a connection between truth-tables and inference rules (your title also focuses on this). Well, that connection is simple:

Any inference rule of the form $\phi_1, \phi_2, ... , \phi_n \vdash \psi$ (i.e. you infer a statement from a bunch of other statements) is valid if and only if in their combined truth-table there is no row where all of $\phi_1, \phi_2, ... , \phi_n$ are True ($1$) but $\psi$ is False ($0$). Or equivalently, the rule is valid iff $(\phi_1 \land \phi_2 \land ... \land \phi_n) \to \psi$ is a tautology (i.e. iff in its truth-table, $(\phi_1 \land \phi_2 \land ... \land \phi_n) \to \psi$ always evaluates to True).
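This validity criterion is easy to brute-force. Here is a Python sketch (the helper `valid` is my own illustrative name, not a standard function):

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """A rule is valid iff no valuation makes every premise true
    and the conclusion false (equivalently: premises → conclusion
    is a tautology)."""
    for row in product([False, True], repeat=n_vars):
        if all(f(*row) for f in premises) and not conclusion(*row):
            return False
    return True

# ∧-Intro: from p and q, infer p ∧ q.
assert valid([lambda p, q: p, lambda p, q: q], lambda p, q: p and q, 2)
# ∨-Intro: from p alone, infer p ∨ q.
assert valid([lambda p, q: p], lambda p, q: p or q, 2)
# An invalid rule for contrast: from p ∨ q, infer p.
assert not valid([lambda p, q: p or q], lambda p, q: p, 2)
```

The second assertion makes the point of the next paragraph: by this criterion, the $\lor$ Intro rule is valid in exactly the same sense as the $\land$ rules.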

Note that here you immediately see the connection between the $\vdash$ and the $\to$ that you also observed. But, for now I want to point out that this principle holds for the $\lor$ rule just as much as for the $\land$ rule. That is: $\phi \to (\phi \lor \psi)$ is a tautology just as much as $(\phi \land \psi) \to \phi$ is.

Now, in a natural deduction system, some inference rules refer not just to statements, but to subproofs. The $\to$ Intro rule is a prime example. The connection with truth-tables is a little less immediate, because what is really going on here (and often hidden by the formalization of the rule) is that the derived statement is not just derived from the assumption(s), but also from what you already knew to be the case at that point in the proof. For example, when $\to$ Intro derives $\phi \to \psi$ from the subproof $\phi \vdash \psi$, what is really the case is that we find that $\psi$ follows from the assumption $\phi$ together with whatever set of other statements $\Gamma$ we earlier assumed to be the case, and therefore we conclude that $\phi \to \psi$ follows from just that set of statements $\Gamma$. Thus, in terms of formal semantics, it is the case that if $\Gamma \cup \{ \phi \} \vDash \psi$, then $\Gamma \vDash \phi \to \psi$, and the $\to$ Intro rule makes that meta-logical truth into a formal inference rule.

By the way, the $\lor$ Elim rule is often given by $\{ \phi \lor \psi, \phi \vdash \chi, \psi \vdash \chi \} \vdash \chi$. That is, in many texts the $\lor$ rule refers to a bunch of subproofs, rather than a bunch of conditionals.

With this deeper understanding of rules that refer to subproofs, we can now make a connection between such rules and truth-tables as well. In fact, let's take the $\lor$ rule as I defined it. Again, let's first point out that this rule is the formalization of the meta-logical theorem that:

If $\Gamma \vDash \phi \lor \psi$, $\Gamma \cup \{ \phi \} \vDash \chi$, and $\Gamma \cup \{ \psi \} \vDash \chi$, then $\Gamma \vDash \chi$

which means that in the combined truth-table for all statements involved, it is the case that for all rows where:

  1. all statements in $\Gamma$ are true and

  2. $\phi \lor \psi$ is true and

  3. $\chi$ is true when $\phi$ is true (or equivalently: $\phi \to \chi$ is true) and

  4. $\chi$ is true when $\psi$ is true (or equivalently: $\psi \to \chi$ is true)

the statement $\chi$ is also true.
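Spelled out for the simplest case (taking $\Gamma$ to be empty and atomic $p, q, r$), the row-by-row check can be sketched in Python:

```python
from itertools import product

# ∨-Elim checked semantically: in every row where p ∨ q, p → r and
# q → r are all true, the conclusion r is true as well.
for p, q, r in product([False, True], repeat=3):
    premises = (p or q) and ((not p) or r) and ((not q) or r)
    if premises:
        assert r  # no counterexample row exists
print("∨-Elim is semantically valid")
```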

Answer 3

I won't discuss all of the possible 16 binary connectives, but I'll give you a start from which you may infer properties of other connectives on your own.

An inference rule expresses an if-then relation: $$\frac{\psi_1 \ \ldots \psi_n}{\phi}$$ means

If we know $\psi_1$ and ... and $\psi_n$, then we know $\phi$ for sure.

That we know the conclusion for sure is the crucial point: There may be no case (row in the truth table) where the conclusion is false given that the premises are true.

One important observation on the relation to the semantics is that the calculus of natural deduction is

  • sound: If a sequent $\psi_1, ..., \psi_n \vdash \phi$ is derivable, then the inference holds semantically; in other words, for all valuations (rows in the truth table), if everything on the left-hand side of the $\vdash$ is 1, then the conclusion on the right-hand side is 1 in that row as well.
  • complete: Every semantic inference is derivable; in other words, if there is a sequent of premises and conclusion for which truth is preserved under all truth table rows, then there will be a natural deduction derivation starting with the premises and ending in the conclusion.

Since rules are minimal derivations themselves, they are also justified in the sense of semantic soundness (in fact, the proof of soundness of the system as a whole builds on the easy justifiability of the atomic rules and the way they are assembled).

In the introduction rules, we go from simpler expressions to more complex expressions. If there is more than one row where the complex expression (the conclusion of an introduction rule) is true, that means there is more than one way to introduce it (more than one introduction rule).
In the case of $\land$, there is only one way for it to become true, so there is only one introduction rule (modulo commutativity, since, as you correctly conjectured, the space between premises is read as an "and").
In the case of $\lor$, there is more than one row with 1, because there are two possible scenarios under which $p \lor q$ can have become true: Either given $p$ (first introduction rule), or given $q$ (second introduction rule). The first row in the truth table is just a combination of these two base scenarios and thereby already covered by the two introduction rules. (In these cases, where we know both $p$ and $q$ and want to infer $p \lor q$, there will be more than one possible proof: One using the first and one using the second $\lor I$ rule.)
For $\to I$, there is no good truth table interpretation: $p \to q$ becomes true if either $\neg p$ or $q$, so one could define a pair of introduction rules inferring $p \to q$ from $\neg p$ and from $q$, respectively, that would be sound, i.e. correctly reflect the semantics of implication. But natural deduction is not about exhaustively listing the cases where the conclusion may become true -- the rules are under no obligation to visibly reflect rows in a truth table. The goal of natural deduction is to capture the way mathematicians would naturally reason in logic, and the intuition is that we can assert $p \to q$ if we have a proof of $q$ starting with the assumption $p$, which motivates the introduction rule in terms of $\vdash$. This rule does also capture the inferences from $\neg p$ to $p \to q$ and from $q$ to $p \to q$ (one can find suitable derivations for all the sound inferences using the given introduction rule: the rules are semantically complete). It's just that the format of the rule does not primarily reflect an exhaustion of cases of truth (rows in a truth table), but rather reasoning about truth in terms of proof.
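Those two case-based inferences (from $\neg p$ to $p \to q$, and from $q$ to $p \to q$) are indeed sound, as a quick row-by-row check in Python confirms:

```python
from itertools import product

for p, q in product([False, True], repeat=2):
    imp = (not p) or q   # truth table of p → q
    if not p:            # first candidate rule: from ¬p, infer p → q
        assert imp
    if q:                # second candidate rule: from q, infer p → q
        assert imp
```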

In the case of elimination rules, we go from a more complex expression to the parts it is composed of.
The connectives which have only one row with 1 are those which have what you observed to be straightforward elimination rules: There is only one situation where $p \land q$ becomes true (namely the row where both $p$ and $q$ are true), so given $p \land q$ (the premise), we can conclude $p$ (the conclusion of the first elimination rule) for sure, because there is only one scenario under which $p \land q$ can become true, and that is one where $p$ is also known to be true. With a similar argument we also know $q$, so there are two elimination rules for $\land$. In the elimination rule for $\to$, there is only one row where both $p \to q$ and $p$ are true, and that is a row in which we know $q$ to be true. Similarly for $\not \to$ with $p$ and $\neg q$: Only one row with 1 in the truth table means we know for sure which simpler statements must have held as well.
The same does not work for $\lor$: Since there is only one occurrence of 0 in the truth table, this means that there is more than one occurrence of 1 -- so there is more than one valuation under which the conclusion can become true. And this means that given the complex expression, there is no subexpression we can conclude to hold for sure. Given $p \lor q$, we cannot definitely infer $p$, nor $q$ -- it might be that $p \lor q$ became true under a valuation where $p$ is true, but it might also be that it became true under a valuation where $p$ is false, so we cannot infer anything directly about $p$ (and likewise about $q$).
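This failure is again directly visible in the truth table; a quick Python check:

```python
from itertools import product

# The rows (valuations) that make p ∨ q true:
rows = [(p, q) for p, q in product([False, True], repeat=2) if p or q]

# In some of those rows p is false, and in others q is false,
# so neither disjunct can be inferred for sure from p ∨ q alone.
assert any(not p for p, q in rows)
assert any(not q for p, q in rows)
```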
Because we cannot draw a conclusion about the subexpressions $p$, $q$, the $\lor E$ rule has a format where instead we infer something about another statement. Indeed, your intuition is correct that the $\lor E$ rule breaks a particular pattern of natural deduction rules in the sense that the conclusion does not contain a subformula of the major premise (major premise = the premise which contains the connective to be eliminated, minor premises = the other premises; in this case, major premise = $p \lor q$, minor premises = $p \to r, q \to r$). It is for this reason that other variants of the $\lor E$ rule have been proposed (e.g. disjunctive syllogism: from $p \lor q$ and $\neg p$ infer $q$, and a second rule for the symmetric side). But these alternative rules violate other desirable properties of natural deduction rules; for instance, the disjunctive syllogism rule is not "pure" because with $\neg$ it involves an additional connective besides the one to be eliminated/introduced. Similar proof-theoretic problems arise for potential rules for $\uparrow, \downarrow, \oplus, \leftrightarrow, \not \to$, as partially discussed in the paper by Zach you linked. So yes, some natural deduction rules actually are a bit "ugly".