Kees Doets's definitions of logical consequence


Hi everyone, I'm reading Kees Doets's Basic Model Theory (which is freely and legally downloadable from https://web.stanford.edu/group/cslipublications/cslipublications/Online/doets-basic-model-theory.pdf). There are two different definitions of logical consequence in it. The official one ($\vDash$) is on p. 6:

If $\Sigma$ is a set of formulas, then the notation $\Sigma \vDash \phi$ -- $\phi$ follows logically from $\Sigma$ -- is used in case $\phi$ is satisfied by an assignment in a model whenever all formulas of $\Sigma$ are.

The other, 'unofficial' one ($\vDash ^*$), is in Exercise 8, which is on p. 7:

Sometimes, logical consequence is defined by: $\Sigma \vDash ^* \phi$ iff $\phi$ is true in every model of $\Sigma$.

It bothers me how these two definitions are different. Indeed, this is exactly what Exercise 8 asks:

Show that if $\Sigma \vDash \phi$, then $\Sigma \vDash ^* \phi$, and give an example showing that the converse implication can fail. Show that if all elements of $\Sigma$ are sentences, then $\Sigma \vDash \phi$ iff $\Sigma \vDash ^* \phi$.

Let me say what I think about the first definition of logical consequence, i.e. $\vDash$. It seems to me that

$\phi$ is satisfied by an assignment in a model whenever all formulas of $\Sigma$ are.

just means

whenever all formulas of $\Sigma$ are satisfied by an assignment in a model, $\phi$ is satisfied by the same assignment in the same model.

which appears to mean

for all models $\mathcal{A}$ and all assignments $\alpha$: if $\mathcal{A} \vDash \psi [\alpha]$ for all $\psi \in \Sigma$, then $\mathcal{A} \vDash \phi [\alpha]$.

I think I'm right about $\vDash$. But I'm less certain about $\vDash ^*$. From p. 6 of the book, '$\mathcal{A}$ is a model of $\phi$' just means:

$\mathcal{A} \vDash \phi$

which means

$\phi$ is satisfied in $\mathcal{A}$ by every assignment.

Therefore, the right-hand side of the definition of $\vDash^*$

$\Sigma \vDash ^* \phi$ iff $\phi$ is true in every model of $\Sigma$.

is just

For all $\mathcal{A}$: if $\mathcal{A}\vDash\psi$ for all $\psi \in \Sigma$, then $\mathcal{A} \vDash \phi$.

However, if what $\mathcal{A} \vDash \phi$ means is just that $\phi$ is satisfied in $\mathcal{A}$ by every assignment, then (and perhaps this is where I have made a crucial mistake that I don't understand) I don't see why it cannot also be translated as

For all $\mathcal{A}$ and all assignments $\alpha$: if $\mathcal{A}\vDash\psi [\alpha]$ for all $\psi \in \Sigma$, then $\mathcal{A} \vDash \phi [\alpha]$.

which is just identical to the definition of $\vDash$!

This elementary problem bothers me quite a bit, and I hope someone can tell me what mistake I have made. Many thanks in advance.

2 Answers

BEST ANSWER

Your analysis of $\vDash$ is fine, but you've shuffled some (important) parts of the definition of $\vDash^*$. The latter definition says: For every $\mathcal A$, if all assignments in $\mathcal A$ make all the formulas in $\Sigma$ true then all assignments in $\mathcal A$ make $\phi$ true.

The crucial difference is that here the "all assignments" quantifier is applied separately to the "make all of $\Sigma$ true" assumption and the "make $\phi$ true" conclusion, whereas in $\vDash$ the "all assignments" quantifier is applied to the whole implication "if it makes $\Sigma$ true then it makes $\phi$ true."
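To make the contrast explicit, here is one way to write the two readings side by side in the question's notation (this symbolic rendering is my own paraphrase of the definitions above):

$$\Sigma \vDash \phi \quad\text{iff}\quad \forall \mathcal{A}\,\forall \alpha\ \bigl[(\forall \psi \in \Sigma:\ \mathcal{A} \vDash \psi[\alpha]) \to \mathcal{A} \vDash \phi[\alpha]\bigr]$$

$$\Sigma \vDash^* \phi \quad\text{iff}\quad \forall \mathcal{A}\ \bigl[(\forall \alpha\,\forall \psi \in \Sigma:\ \mathcal{A} \vDash \psi[\alpha]) \to \forall \alpha:\ \mathcal{A} \vDash \phi[\alpha]\bigr]$$

In the first, $\forall \alpha$ scopes over the whole implication; in the second, it appears separately inside the antecedent and the consequent.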

The underlying general fact here is that quantifiers $\forall x$ cannot be distributed across implications. $\forall x\,(P(x)\to Q(x))$ is not equivalent to $(\forall x\,P(x))\to(\forall x\,Q(x))$. Example: It's true that "if all people are American then all people are left-handed" because the antecedent and consequent are both false. But it's not true that "all Americans are left-handed."
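This general fact can be checked by brute force on a small finite domain. Here is a minimal sketch; the domain and the predicates $P$, $Q$ are arbitrary illustrative choices, not anything from the book:

```python
# Check on a finite domain that "forall x (P(x) -> Q(x))" and
# "(forall x P(x)) -> (forall x Q(x))" can have different truth values.
domain = [0, 1, 2]

def P(x):  # "x is even" -- an arbitrary illustrative predicate
    return x % 2 == 0

def Q(x):  # chosen so that some element satisfying P fails Q
    return x > 1

# forall x (P(x) -> Q(x)): fails, since P(0) holds but Q(0) does not
pointwise = all((not P(x)) or Q(x) for x in domain)

# (forall x P(x)) -> (forall x Q(x)): holds vacuously, since P(1) fails
distributed = (not all(P(x) for x in domain)) or all(Q(x) for x in domain)

print(pointwise, distributed)  # the two formulas disagree
```

The point of the sketch is only that the two Boolean values come apart on a concrete domain, exactly as in the American/left-handed example.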

SECOND ANSWER

I think it's best to take the question's hint and understand that the two notions are not equivalent by looking at a counter-example.

Let $\Sigma$ contain only the formula $\forall x (x + y = x)$, and let $A$ be a model of this formula: i.e., for every assignment of a member of $A$'s domain to the variable $y$, the formula is true. This is essentially to say that $x + y = x$ holds for all $x$ and for all $y$ in $A$, because we can assign any member of the domain to $y$. So it turns out that $A \models \forall y \forall x (x + y = x)$. This establishes:

$$ \Sigma \models ^{*} \forall y \forall x (x + y = x) $$

Is it then also the case that

$$ \Sigma \models \forall y \forall x (x + y = x) $$

i.e., for any model $B$ and any assignment of a member of $B$'s domain to $y$ that satisfies $\forall x (x + y = x)$, does $B$ also satisfy $\forall y \forall x (x + y = x)$? Try to construct a model and an assignment where this isn't true. Hint: think of the natural numbers and assign $0$ to $y$.

The subtle point is that, in reasoning about $\models ^{*}$, we require that a model $A$ satisfies all of $\Sigma$ on any assignment in $A$. With plain old $\models$, we only require of a model $B$ that a particular assignment in $B$ satisfies $\Sigma$.
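The hint can also be played out on a small finite structure so that the check is exhaustive. As an illustrative sketch, take $\mathbb{Z}/3\mathbb{Z}$ with addition mod $3$ in place of the naturals (my choice, not the answer's): the assignment $y := 0$ satisfies $\Sigma$, yet $\forall y \forall x (x + y = x)$ fails in the model.

```python
# Sketch of the counterexample in the finite model Z/3 with addition mod 3.
# Sigma = { forall x (x + y = x) } has y free; phi = forall y forall x (x + y = x).
domain = [0, 1, 2]

def add(a, b):
    return (a + b) % 3

def sigma_satisfied(v):
    """Does the assignment y := v satisfy 'forall x (x + y = x)'?"""
    return all(add(x, v) == x for x in domain)

# phi is a sentence, so its truth in the model does not depend on an assignment
phi_true = all(sigma_satisfied(v) for v in domain)

print(sigma_satisfied(0))  # True: the assignment y := 0 satisfies Sigma ...
print(phi_true)            # False: ... yet phi fails in the model (take y := 1),
                           # witnessing the failure of Sigma |= phi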