The construction of the term model in completeness proofs of FOL


In the Henkin completeness proof for First Order Logic (FOL) in a language without the equality symbol, after constructing a complete, consistent, saturated set of sentences in a language $L'$, obtained from a language $L$ by adjoining denumerably many fresh constants, a term model is constructed whose domain is the set of closed $L'$ terms.
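For concreteness, here is one common way to spell out the construction the question describes (a sketch only; $\Delta$ denotes the complete, consistent, saturated set of $L'$ sentences, and the notation is mine, not tied to any one textbook):

```latex
% Term model M for a maximal consistent, saturated set \Delta
% in a language without the equality symbol.
\[
  |M| \;=\; \{\, t : t \text{ a closed } L' \text{ term} \,\}, \qquad
  c^{M} = c, \qquad
  f^{M}(t_1,\dots,t_n) = f(t_1,\dots,t_n),
\]
\[
  (t_1,\dots,t_n) \in P^{M}
  \;\iff\;
  P(t_1,\dots,t_n) \in \Delta .
\]
```

The truth lemma, $M \vDash \varphi$ iff $\varphi \in \Delta$ for every sentence $\varphi$, is then proved by induction on $\varphi$, with the Henkin witnesses handling the existential case.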

Why is the set of closed $L'$ terms usually chosen in the completeness proofs I have seen, and not simply the set of all $L'$ terms? Does the completeness proof depend on constructing the term model in this way?

Rob Arthan suggests in a comment that the construction requires the domain to consist of closed terms. But some term model constructions define the domain of the model as the set of all terms of the expanded language whose free variables are among a certain list $a, b, c, \dots$ (see cs.toronto.edu/~toni/Courses/438/Mynotes/page31.pdf), so his answer here is not very satisfying.


Gödel's original proof of the completeness theorem for predicate logic (1929) used the domain of natural numbers.

Leon Henkin (1949) is the author of the "modern" version, based on the so-called model-existence lemma.

Henkin builds a domain of "individuals" by adding individual constants to the language; see page 163:

In fact we take as our domain $I$ simply the set of individual constants of $S_{\omega}$, and we assign to each such constant (considered as a symbol in an interpreted system) itself (considered as an individual) as denotation.

So the trick is: instead of assuming the "platonic" realm of numbers, we assume the (less platonic?) realm of symbols; the choice of constants vs closed terms is irrelevant.

This is consistent with Henkin's approach that proves the theorem for a set $\Delta$ of sentences (closed formulas); the extension of the theorem (page 164) to "the case of wff $A$ which contains some free occurrence of an individual variable may be reduced to the case of the closure of $A$ obtained by prefixing to $A$ universal quantifiers with respect to each individual variable with free occurrences in $A$".
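Henkin's reduction to sentences can be displayed as follows (my formulation, using the standard conventions that $\vdash$ is provability and that an open formula is valid iff it is true under every assignment; the two directions follow from generalization and $\forall$-instantiation):

```latex
% Universal closure: a wff A with free variables x_1,...,x_n
% is reduced to the sentence A'.
\[
  A' \;=\; \forall x_1 \cdots \forall x_n\, A,
  \qquad
  \vdash A \iff \vdash A',
  \qquad
  \vDash A \iff \vDash A'.
\]
```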

Compare with Joseph Shoenfield's Mathematical Logic (1967), page 44:

we are faced with the following problem: given a consistent theory $T$, how shall we find a model of $T$? Since only the theory is given, we must build our model from syntactical materials. Now the expressions of $T$ which designate particular individuals are the variable-free terms. The basic idea is to take these terms as individuals. Actually, the individuals will be sets of variable-free terms; this is necessary because the theorems of $T$ may force two variable-free terms to represent the same individual.
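Shoenfield's quotient step can be sketched as follows (my notation, not Shoenfield's; here $T$ is the given consistent theory, in a language with equality):

```latex
% Identify variable-free terms that T proves equal; the equality
% axioms make ~ an equivalence relation (indeed a congruence).
\[
  t \sim s \;\iff\; T \vdash t = s,
  \qquad
  |M| \;=\; \{\, [t]_{\sim} : t \text{ a variable-free term} \,\}.
\]
```

Without an equality symbol, as in the question's setting, this quotient step is unnecessary and the variable-free terms themselves can serve as the individuals.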

Note: some modern textbooks, e.g. Tourlakis' Mathematical Logic (2003), page 64, use $\mathbb N$ as the domain for Henkin's construction.

Enderton's approach is a little different: he considers a set $\Gamma$ of formulas (not necessarily sentences), and his basic definition of logical consequence differs accordingly.

Henkin's and Shoenfield's definitions boil down to:

$\Gamma$ logically implies $A$ [$\Gamma \vDash A$], iff every model of $\Gamma$ is also a model of $A$,

while Enderton starts from satisfaction:

Let $\Gamma$ be a set of wffs, $\varphi$ a wff. Then $\Gamma$ logically implies $\varphi$, written $\Gamma \vDash \varphi$, iff for every structure $\mathfrak A$ for the language and every function $s : \text{Var} \to |\mathfrak A|$ such that $\mathfrak A$ satisfies every member of $\Gamma$ with $s$, $\mathfrak A$ also satisfies $\varphi$ with $s$.
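Schematically (my formulation), Enderton's definition quantifies over assignments as well as structures:

```latex
\[
  \Gamma \vDash \varphi
  \;\iff\;
  \text{for all } \mathfrak A \text{ and } s:\;
  \bigl(\,\forall \gamma \in \Gamma,\ \mathfrak A \text{ satisfies } \gamma \text{ with } s\,\bigr)
  \;\Rightarrow\;
  \mathfrak A \text{ satisfies } \varphi \text{ with } s .
\]
```

When $\Gamma$ and $\varphi$ are sentences, satisfaction does not depend on $s$, so the quantifier over $s$ is vacuous and this reduces to the Henkin/Shoenfield formulation above: every model of $\Gamma$ is a model of $\varphi$.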