What does "consistency" mean if formal systems are inherently meaningless?


In the book Gödel's Proof by Ernest Nagel and James R. Newman, the authors insist that formal systems are to be regarded as meaningless mechanical systems, which yield theorems merely by applying syntactic transformation rules to strings composed of meaningless symbols.

But then the authors proceed to give a proof of consistency for the logic of propositions from Principia Mathematica, and here's how they introduce their proof:

Our aim is to show that this set of axioms is not contradictory, that is, to prove "absolutely" that it is impossible by using the Transformation Rules to derive from the axioms a formula $S$ together with its formal negation $\sim{}S$.

My concern is: if we take a formal system to handle only meaningless strings, how can we possibly say that the strings $S$ and $\sim{}S$ are mutually incompatible assertions of our formal system, and that if both are theorems of the system then the system is inconsistent?

My intuition is that at this point we're "jumping out" of the formal system and using external rules of inference instead of the system's own rules; it is then acceptable to use a model that interprets the $\sim{}$ sign as logical negation, in which case $S$ and $\sim{}S$ are indeed logically incompatible.

But still, I find it difficult to accept this as valid reasoning. Any insights to help my understanding?

There are 3 answers below.


A proof system is also commonly called a calculus (as in: propositional calculus), and we can handle it "mechanically", i.e. while avoiding any interpretation.

Of course, it is "useful" because we can interpret it (for example, as a way to formalize logical arguments).

It is the same with the addition algorithm of elementary arithmetic: with it you can perform additions in a "mechanical" way, ignoring the fact that you are, say, adding the balls contained in two boxes.
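To make the analogy concrete, here is a minimal sketch (the function name `add_digit_strings` is my own) of addition as pure symbol manipulation: the rules transform digit strings position by position, with no reference to what the numerals denote.

```python
def add_digit_strings(a: str, b: str) -> str:
    """Add two decimal digit strings by the schoolbook carrying rule."""
    result, carry = [], 0
    # Pad the shorter string with leading zeros so the columns line up.
    width = max(len(a), len(b))
    a, b = a.rjust(width, "0"), b.rjust(width, "0")
    # Process columns right to left, applying the fixed digit/carry rule.
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        result.append(str(total % 10))
        carry = total // 10
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(add_digit_strings("478", "256"))  # → 734
```

The algorithm never "knows" whether it is counting balls in boxes; it only rewrites symbols according to fixed rules.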


Having defined a proof system as a set of rules for "manipulating" formulae, we define a derivability relation between formulae: $⊢$.

With it, we say that a set of formulae $Γ$ is consistent when $Γ⊬⊥$, where $⊥$ (the falsum) may be "instantiated" by every contradiction.

If the system has no $\bot$ symbol but does have a $\lnot$ symbol, we say that $Γ$ is inconsistent when both $Γ ⊢ \varphi$ and $Γ ⊢ \lnot \varphi$, for some formula $\varphi$.

If we want to avoid $\lnot$ as well, we say that $Γ$ is inconsistent when $Γ ⊢ \varphi$ for every formula $\varphi$.

We can prove that (for a sufficiently rich language) the three conditions above are equivalent.


When we interpret the formal system with the "usual" truth-functional semantics based on the truth tables for the connectives, we define:

a formula $\varphi$ is a tautology if $[\varphi]_v =$ t for all valuations $v$

and:

a formula $\varphi$ is a contradiction if $[\varphi]_v =$ f for all valuations $v$.

Under this "convention" we have:

$[\bot]_v =$ f, for all $v$ [thus: the falsum]

and:

a formula $\varphi$ is a tautology iff $\lnot \varphi$ is a contradiction.

Thus, we have many instances of contradictory formulae, like:

$P \land \lnot P$ in propositional logic,

and:

$0=0 \land 0 \ne 0$ in the language of arithmetic.

Under the semantical interpretation, all are equivalent to $\bot$.
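The classifications above can be checked mechanically by enumerating all valuations. A minimal sketch (the name `classify` is my own), where a formula is represented as a Boolean function of its propositional variables:

```python
from itertools import product

def classify(formula, num_vars):
    """Classify a formula as tautology, contradiction, or contingent
    by evaluating it under every valuation of its variables."""
    values = [formula(*v) for v in product([True, False], repeat=num_vars)]
    if all(values):
        return "tautology"       # true under every valuation
    if not any(values):
        return "contradiction"   # false under every valuation
    return "contingent"

print(classify(lambda p: p and not p, 1))   # → contradiction
print(classify(lambda p: p or not p, 1))    # → tautology
print(classify(lambda p, q: p and q, 2))    # → contingent
```

So $P \land \lnot P$ comes out false under every valuation, exactly as $\bot$ does.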


A formal system is defined to be inconsistent if all formulae are provable, i.e., if every formula is a theorem. If (as in Nagel and Newman) the formal system includes the usual rules of propositional logic, then a system is inconsistent iff there is a formula $S$ such that both $S$ and $\lnot S$ are theorems (because by those rules you can derive any formula $T$ given $S$ and $\lnot S$). And it is then reasonable to follow Nagel and Newman and say that a system of axioms is "contradictory" to mean that it is "inconsistent". Nothing that I have just written relies on any assumption about the meaning of formulas. You introduced the idea of assertions being "mutually incompatible" and then drew an unjustified conclusion from that idea: you don't need to "jump outside the system" or use semantic arguments to prove that if $S$ and $\lnot S$ are provable then every formula is provable.
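The purely syntactic point can be illustrated mechanically: from the two strings "S" and "~S", disjunction introduction followed by disjunctive syllogism yields any string $T$, without interpreting a single symbol. A sketch (the string encoding and the helper name `derive` are my own illustrative choices):

```python
def derive(target: str) -> list[str]:
    """Derive `target` from the strings 'S' and '~S' by two syntactic rules."""
    steps = ["S", "~S"]              # given: both S and its formal negation are theorems
    steps.append(f"(S v {target})")  # disjunction introduction: from S, infer (S v T)
    steps.append(target)             # disjunctive syllogism: from (S v T) and ~S, infer T
    return steps

for line in derive("T"):
    print(line)  # prints the four lines of the derivation
```

Each step is licensed by a rule that mentions only the shapes of strings, never their meanings.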


I have some set $S$ of strings over a certain alphabet, defined in a complicated way. There are two basic ways for $S$ to be uninteresting: maybe $S$ is empty, or maybe $S$ is the set of all strings. Taking $S$ to be the set of strings corresponding to theorems of (say) $PA$, clearly the former case doesn't happen; so you can think of the formalist interpretation of "consistency" as being "interestingness". Ignoring all possible semantic interpretations, $S$ just isn't very interesting if it consists of all strings.

Now we can prove theorems about what $S$ could be. For instance, one thing we can prove is that if $\sigma$ and $\neg\sigma$ are in $S$ for some string $\sigma$, then every string is in $S$. And to argue this, we don't have to talk about the meaning of the strings at all.

Note that there is a separate question here: why is the set $S$ something we care about in the first place? This is where formalist responses get more complicated (disclaimer: I'm a formalist), but is orthogonal to your question.


OK, fine: we really shouldn't be talking about $S$ as a set of strings, but rather as a set of well-formed strings: that is, we have the set of all strings, and then the subset of all strings corresponding to wffs, and then inside that is $S$. (Or we can use a weird syntax where every string is well-formed.) But that's a really minor point.