What is $\Sigma$ supposed to mean conceptually in logic and model theory?


I was learning logic through these notes, and I was trying to understand the conceptual purpose of defining $\Sigma$ in the context of truth (rather than in the context of provability).

In contrast, I think the reason $\Sigma$ is defined in the context of provability is 100% clear to me. If a proof is just a length-$n$ sequence of propositions ending at the goal $p_n = p$, built using only propositional axioms, elements of $\Sigma$, and/or Modus Ponens (MP), then it seems very plausible that more proofs become reachable if we allow more propositions to be used, i.e. the ones in $\Sigma$. However, in this context everything is purely a symbol game, and truth has no relation to provability. So $\Sigma \vdash p$ makes sense to me.

But when we try to define what $\Sigma \vDash p$ is supposed to mean conceptually, it's not clear to me. I understand what the statement says (it means that $p$ is true in all models of $\Sigma$, i.e. under every truth assignment on the atoms $A$ that makes every proposition $q \in \Sigma$ true, $p$ is also true). It seems to me that saying $\Sigma \vDash p$ only adds a quantifier over all truth assignments, i.e.

$$ \forall t,\ t: A \to \{0,1\}, \text{ s.t. } \big(\forall q \in \Sigma,\ f_q(t) = 1\big),\ f_p(t) = 1$$

where each $f_q : \{0,1\}^A \to \{0,1\}$ is the truth function of the proposition $q$.
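To make the quantified definition concrete, here is a brute-force sketch in Python; the representation of propositions as Boolean functions of an assignment is my own choice for illustration, not anything from the notes:

```python
from itertools import product

# Hypothetical sketch: a proposition is represented as a function from a
# truth assignment (a dict mapping atom names to bools) to a bool.
def entails(sigma, p, atoms):
    """Check Sigma |= p by enumerating every truth assignment on `atoms`."""
    for values in product([False, True], repeat=len(atoms)):
        t = dict(zip(atoms, values))
        # Only assignments satisfying every proposition in Sigma matter.
        if all(q(t) for q in sigma) and not p(t):
            return False  # found a model of Sigma in which p fails
    return True

# Example: {a, a -> b} |= b, a semantic version of modus ponens.
sigma = [lambda t: t["a"], lambda t: (not t["a"]) or t["b"]]
print(entails(sigma, lambda t: t["b"], ["a", "b"]))  # True
```

Note that an assignment falsifying some member of $\Sigma$ is simply skipped: it imposes no constraint, exactly as in the definition.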

I don't understand why that's an interesting thing to consider on its own. I wonder whether this vocabulary was developed only for the completeness theorem of propositional logic (i.e. to show the equivalence of proof and truth), or whether there is a conceptual idea this language is trying to model that I am missing ($\Sigma$ seems much more meaningful/important in the context of provability, while in the context of truth it seems arbitrary).

Am I missing something?


Short version: the semantic approach is both a useful tool for proving facts about deductions, and the more fundamental of the two when we're studying structures as opposed to deductions (which logic in fact does).


First, let me motivate $\models$ in a purely pragmatic way. Suppose I want to prove a statement of the form "$\Gamma\not\vdash\varphi$." This is a "purely syntactic" claim - it's all about deductions - but the apparatus of formal proof doesn't make it too easy: I have to somehow rule out all possible deductions.

This isn't insurmountable, but the "semantic approach" - that is, using interpretations and $\models$ - often improves things greatly. Specifically, by the soundness theorem it's enough to show $\Gamma\not\models\varphi$. This can be done by coming up with a counterexample - a single interpretation satisfying $\Gamma$ but not satisfying $\varphi$. Note that we've shifted the focus from proving something about all possible proofs to finding a single counterexample; it should be clear why this is at least potentially useful!

  • Here's a quick test: can you think of a way to show that $$\{p\vee q, r\rightarrow (p\rightarrow q), q\leftrightarrow\neg p, \neg q\vee(r\rightarrow p)\}\not\vdash (p\wedge q)\vee r$$ that's faster than just writing down (and checking) the counterexample interpretation $p=\top, q=\perp, r=\perp$?
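For the skeptical, checking that counterexample interpretation really is quick; a Python sketch (the encoding of the connectives as Boolean operations is mine):

```python
# Checking the counterexample interpretation p = T, q = F, r = F.
def implies(a, b):
    return (not a) or b

p, q, r = True, False, False

premises = [
    p or q,                      # p v q
    implies(r, implies(p, q)),   # r -> (p -> q)
    q == (not p),                # q <-> not p
    (not q) or implies(r, p),    # (not q) v (r -> p)
]
conclusion = (p and q) or r      # (p ^ q) v r

print(all(premises), conclusion)  # True False
```

Every premise comes out true while the conclusion comes out false, so by soundness no deduction of $(p\wedge q)\vee r$ from those premises can exist.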

Even the other direction can benefit from a semantic approach: if I want to show that $\Gamma\vdash\varphi$, by the completeness theorem it's enough to show that $\Gamma\models\varphi$. But now I'm arguing about interpretations, and I can reason about them mathematically as usual. And this sort of reasoning is often much shorter than actually writing out a full formal deduction.

  • This may feel a bit dubious, but you've probably already done it a lot - this is exactly what you're doing when you use truth tables to argue that a sentence is a validity! You're showing that any possible truth assignment to the atomic sentences makes the given sentence true. This is a semantic argument, and the completeness theorem is what makes it okay.
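The truth-table argument for a validity is likewise mechanical. A small sketch, using Peirce's law purely as an illustrative example of my own choosing:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Peirce's law: ((p -> q) -> p) -> p.
def peirce(p, q):
    return implies(implies(implies(p, q), p), p)

# The truth-table check: every assignment makes the sentence true.
print(all(peirce(p, q) for p, q in product([False, True], repeat=2)))  # True
```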

So even if all you care about is the "syntactic side," the semantic approach is still valuable.


Next, let me try to motivate $\models$ "from the ground up." Logic is about more than one thing - we're interested in describing structures as much as we are in analyzing proofs. When we're doing the former, "$\models$" is the inherently-meaningful thing to look at, and "$\vdash$" is a related thing which may or may not be useful at any given point. This isn't necessarily clear when you're only familiar with propositional logic, since the notion of "structure" there is so limited, but in more complicated logics - e.g. first-order (or predicate) logic - the notion of interpretation (or "structure") is much more interesting. A structure in the sense of first-order logic is basically a set together with some relations, functions, and constants. For example, all the algebraic structures you're familiar with - groups, rings, fields, ... - are examples of first-order structures.

Questions about these are then often equivalent to questions about $\models$. Here's a silly example: the purely mathematical question

"Is every group of order $99$ abelian?"

can be rephrased as

"Does $\{\alpha_{asso}, \alpha_{id},\alpha_{inv},\alpha_{99}\}\models\forall x,y(x*y=y*x)$?"

where $\alpha_{asso}$ is the associativity axiom, $\alpha_{id}$ is the identity axiom, $\alpha_{inv}$ is the inverses axiom, and $\alpha_{99}$ says "there are exactly $99$ elements in the domain." Each of these can in fact be written as a first-order sentence - this is a good exercise.
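Without spoiling the exercise entirely, here is one standard way the first three sentences come out, assuming a signature with a binary operation symbol $*$ and a constant symbol $e$ (and with $\alpha_{99}$ shown only schematically, since writing out $99$ variables is mechanical but long):

```latex
\begin{align*}
\alpha_{asso} &: \forall x\,\forall y\,\forall z\ \big((x*y)*z = x*(y*z)\big)\\
\alpha_{id}   &: \forall x\ (x*e = x \wedge e*x = x)\\
\alpha_{inv}  &: \forall x\,\exists y\ (x*y = e \wedge y*x = e)\\
\alpha_{99}   &: \exists x_1\cdots\exists x_{99}\
  \Big(\bigwedge_{1\le i<j\le 99} x_i \neq x_j
  \ \wedge\ \forall y\,\bigvee_{1\le i\le 99} y = x_i\Big)
\end{align*}
```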

Indeed, the semantics often comes first. In first-order logic, we usually begin with the idea of a first-order structure and $\models$, and we justify our choice of proof system via the soundness and completeness theorems.

  • In fact I believe this is how it historically developed as well (that is, with a more-or-less specific "intended semantics" in mind from the beginning). Meanwhile, I don't know for certain, but I suspect the opposite is true of propositional logic: that at least a large part of the deductive apparatus (e.g. "from $a$ and $b$, infer $a\wedge b$") existed prior to any serious discussion of truth assignments as objects in their own right.

Basically, the point is that logic is not only the "mathematical study of deductions" (or similar). Logic also looks at how we use formal languages to analyze structures, and classes of structures, and etc. In this context, the thing we're really interested in (having fixed a language, and a way of interpreting that language in terms of the structures we're studying - that is, a notion of semantics) is "$\models$" - and "$\vdash$," if anything, becomes a useful tool.