Unique readability of first order languages in Enderton


I am reading Enderton's "A Mathematical Introduction to Logic" and I have been puzzled by the way Enderton chose to prove that no proper initial segment of a term is itself a term (p. 106).

To do this he defines a function $K$ that assigns an integer to each symbol and extends it to strings by summing over their symbols. He then proves a couple of lemmas, such as $K(t)=1$ for every term $t$, and finally $K(s)<1$ for every proper initial segment $s$ of a term.
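As a concrete illustration (not Enderton's own text), here is a minimal sketch of such a counting function, assuming terms are written in Polish (prefix) notation, i.e., an $n$-ary function symbol followed by its $n$ argument terms. The signature, symbol names, and the rule $K(\text{symbol}) = 1 - \text{arity}$ are illustrative assumptions:

```python
# Toy signature: f is binary, g is unary; x, y, c are variables/constants
# (arity 0). Each symbol contributes 1 - arity to the count.
ARITY = {"f": 2, "g": 1, "x": 0, "y": 0, "c": 0}

def K(string):
    """Sum of (1 - arity) over the symbols of the string."""
    return sum(1 - ARITY[s] for s in string)

term = "fgxfyc"  # prefix notation for f(g(x), f(y, c))
print(K(term))                                       # 1 for a full term
print([K(term[:i]) for i in range(1, len(term))])    # proper prefixes: all < 1
```

Running this on the sample term gives $K(t) = 1$, while every proper initial segment has a value strictly below 1, matching the two lemmas.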

I wonder why he chooses to adopt this particular method over the others I have usually seen in this context. I know this is usually proven by induction on the length or complexity of the terms. However, he has already proven a principle of induction for any set generated from a base set and closed under a set of operations, so why doesn't he use that? Is it because there are infinitely many function symbols to check? Wouldn't another (nested) induction take care of that? Is there some deeper reason to prefer Enderton's method over the ones mentioned?

EDIT: Assume a definition of terms like that of Enderton; i.e.,

Definition (Terms): The set of terms is the set of expressions that can be built up from constants and variables by applying (zero or more times) the "function" operation.

This is essentially the standard recursive definition. The alternative methods I have seen (as stated above) are induction on the length of terms or on their complexity. For induction on length, the proof would go along the following lines.

Proof: The base case is trivial, as the only proper initial segment of a term of length 1 is the empty string, which is not a term. Assuming the lemma holds for all terms of length at most $n$, a term $t$ of length $n+1$ must be of the form $f(t_1,\dots,t_k)$ for terms $t_i$ of smaller length. Supposing there is a string $\sigma$ that is both a term and a proper initial segment of $t$ then leads to a contradiction. Therefore, the lemma holds for all terms.
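The lemma can also be checked by brute force on small cases, which is sometimes a useful sanity check before attempting either proof. The sketch below (my own illustration, using a toy prefix-notation signature, not Enderton's) generates all terms up to a small nesting depth and confirms that no term is a proper initial segment of another:

```python
# Toy signature in prefix notation: f binary, g unary, x and c atomic.
ARITY = {"f": 2, "g": 1, "x": 0, "c": 0}

def terms(depth):
    """All terms whose nesting depth is at most `depth`."""
    if depth == 0:
        return {s for s, n in ARITY.items() if n == 0}
    smaller = terms(depth - 1)
    out = set(smaller)
    for s, n in ARITY.items():
        if n == 1:
            out |= {s + t for t in smaller}
        elif n == 2:
            out |= {s + t1 + t2 for t1 in smaller for t2 in smaller}
    return out

ts = terms(2)
# No term is a proper prefix of another term.
assert not any(a != b and b.startswith(a) for a in ts for b in ts)
print(f"checked {len(ts)} terms: no term is a proper prefix of another")
```

Of course this checks only finitely many cases; the induction (or Enderton's counting function) is what covers all terms at once.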