Is there a precise way to state that a structure does not satisfy a theory because its signature is different?


A structure $\mathcal{S}$ is a triple $\langle\sigma, D, I\rangle$ consisting of a signature $\sigma$, which determines which function and relation symbols the structure interprets; a domain $D$, which specifies what objects are under consideration; and an interpretation $I$, which assigns to each symbol of $\sigma$ its meaning over $D$ (a function, a relation, or a distinguished element).

Yesterday I learnt that the $\vDash$ and $\not\vDash$ symbols are only defined for a structure when the signature of the structure (on the LHS) contains at least the symbols of the theory or sentence (on the RHS):

$$ \mathcal{S}\vDash T$$

This got me wondering: it seems true to say that a structure $\mathcal{S}$ does not satisfy a theory $T$ if the structure's signature does not contain all the symbols used by $T$ (for instance, it may lack a predicate symbol that $T$ uses).

However, with the way $\vDash$ is defined, we cannot write $\mathcal{S}\not\vDash T$ from this observation.

A question then arises: Is there any way to express that a structure does not satisfy a sentence or theory, because its signature doesn't match, in more precise terminology than natural language?


BEST ANSWER

For what it's worth I disagree with your initial claim, although of course this will come down to a convention: I would say that "$\not\models$" is exactly the complement of "$\models$," so that if $\varphi$ uses a symbol not in the signature of $\mathcal{A}$ we have both $\mathcal{A}\not\models\varphi$ and $\mathcal{A}\not\models\neg\varphi$. In particular, under my convention we have the following equivalence: $\varphi$ uses a symbol not in the signature of $\mathcal{A}$ iff $\mathcal{A}\not\models\varphi\vee\neg\varphi$.

However, that's a bit messy. In general, even though interpreting "$\not\models$" as the literal complement of "$\models$" is the right call, I also think that expositions ought not to crucially exploit pathologies when there's a natural way to avoid this. In this case, I think the right call is something like "$\varphi\not\in Sent(Sig(\mathcal{A}))$," where "$Sig(\cdot)$" sends each structure to its signature and "$Sent(\cdot)$" sends each signature to the set of sentences over that signature. These two component functions are quite useful to have lying around, so using them here is a reasonable thing to do.
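The $Sig(\cdot)$ and $Sent(\cdot)$ idea can be sketched in a few lines of code. This is a hypothetical illustration (the names `sig`, `symbols`, `in_sent`, and the dictionary representation of a structure are all invented for this sketch, not from any standard library): a signature is modeled as a set of symbol names, and a sentence belongs to the language of a structure iff all its symbols lie in that signature.

```python
def sig(structure):
    """Sig(.): send a structure to its signature (here, a set of symbol names)."""
    return structure["signature"]

def symbols(sentence):
    """The non-logical symbols occurring in a sentence.
    For illustration, a sentence is represented by its set of symbols."""
    return set(sentence)

def in_sent(sentence, signature):
    """phi is in Sent(sigma) iff every symbol of phi belongs to sigma."""
    return symbols(sentence) <= signature

# A structure interpreting only '<' and '+':
A = {"signature": {"<", "+"}}

print(in_sent({"<"}, sig(A)))        # True: phi is in Sent(Sig(A))
print(in_sent({"<", "*"}, sig(A)))   # False: '*' is not in the signature of A
```

The second check is exactly the precise statement the question asks for: rather than writing $\mathcal{A}\not\models\varphi$, we report $\varphi\not\in Sent(Sig(\mathcal{A}))$.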

SECOND ANSWER

You have hit on an interesting kind of question that has been asked since the beginning of the field of mathematical logic.

The general question is: When we define some relation $≈$ between objects of type $S$ and objects of type $T$, we are usually interested in asking whether $x≈y$ only when $x∈S$ and $y∈T$. So what should we do in the other case?

There are two main 'camps' of preference for the answer. Everyone agrees that if we represent $≈$ as a set of pairs from $S×T$, and define "$x≈y$" to mean simply "$⟨x,y⟩∈{≈}$" (membership in that representing set), then of course nothing is related by $≈$ outside of $S×T$.

The first camp simply identifies $≈$ with that representing set, and nothing more. That is precisely why Noah says "For what it's worth I disagree with your initial claim, although of course this will come down to a convention: I would say that "$⊭$" is exactly the complement of "$⊨$"".

The second camp prefers to consider the expression "$x≈y$" as well-typed iff one has shown that $x∈S$ and $y∈T$. If one has not done so, then one is not permitted to write the expression at all. Note that this is a meta-level restriction: we cannot, within the foundational system itself, write down a sentence saying that "$x≈y$" is well-typed! This is also called strict typing, and such restrictions on what we can write make the language correspond more closely to what we intuitively feel is sensible to write. This is precisely why Noah also said "I also think that expositions ought not to crucially exploit pathologies when there's a natural way to avoid this.". However, be aware that strict typing comes at a rather heavy price: we no longer have a nice clean recursive definition of the well-formed expressions! So from the perspective of the foundations of mathematics it is best to first see how things are done in a standard FOL system (as Noah showed) before looking at other systems with stricter typing rules, even if the latter may feel more natural.

For a very simple example, we define $x/y$ for $x,y∈ℝ$ such that $y≠0$. But the cleanest FOL formalization of the reals would also define $x/0$ for all $x∈ℝ$. Surprised? Well, otherwise you could not even write $∀x∈ℝ\ ( x≠0 ⇒ 1/(1/x)=x )$, since forming the term $1/(1/x)$ would first require proving that $1/x≠0$!
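The "total division" convention can be sketched as follows. This is an illustrative sketch only (the function name `div` and the choice of $x/0 = 0$ are assumptions for the example; some proof assistants adopt a similar convention): once division is total, the term $1/(1/x)$ is always well-formed, and the identity only needs the hypothesis $x≠0$ to be *true*, not to be checked before the term may be written.

```python
def div(x, y):
    """Division made total: div(x, 0) returns 0.0 by convention."""
    return x / y if y != 0 else 0.0

# With division total, 1/(1/x) = x can be checked for nonzero x without
# first proving that 1/x != 0 just to form the expression:
for x in [2.0, -0.5, 4.0]:
    assert div(1.0, div(1.0, x)) == x

print(div(1.0, 0.0))  # 0.0 by convention, not an error
```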

By the way, this divide also arises in programming (unsurprisingly): some languages have (mostly) static typing, where you simply cannot use objects in ways not allowed by their types, while others have (mostly) dynamic typing, where you can attempt to use an object in (almost) any way you like, and an ill-typed use gives a run-time error instead of a compile-time error.
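A minimal sketch of the dynamic-typing side, in Python (the function `shout` is invented for the example): the ill-typed call is accepted by the language and only fails when actually executed.

```python
# Python is dynamically typed: an ill-typed use is rejected only when it
# is actually executed, not when it is written down.
def shout(s):
    return s.upper() + "!"

print(shout("hello"))   # 'HELLO!'

try:
    shout(42)           # ints have no .upper(); this fails only at run time
except AttributeError as e:
    print("run-time type error:", e)
```

In a statically typed language (or in Python under a static checker such as mypy, with type annotations), the analogue of `shout(42)` would be rejected before the program ever runs, which is the programming counterpart of the strict-typing camp above.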