Are quantifiers only required because of infinite proposition chains?


Quantifiers such as $\forall x \ P(x)$ and $\exists x \ P(x)$ are in some ways equivalent to, respectively, a long conjunction chain being true and at least one statement in a long disjunction chain being true.

Why do we even need quantifiers at all if we can achieve the equivalent thing with conjunctive/disjunctive chains? Is the only reason (besides convenience) that we may have an infinitely long chain?

What do first-order logic and quantifiers do that we absolutely cannot do with propositional logic?
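For a finite domain the equivalence in the question can be checked directly. A minimal Python sketch (the domain and predicate here are illustrative, not from the original post):

```python
# Over a FINITE domain, quantifiers reduce to finite chains:
#   forall x P(x)  <=>  P(a1) and P(a2) and ... and P(an)
#   exists x P(x)  <=>  P(a1) or  P(a2) or  ... or  P(an)

domain = [1, 2, 3, 4]        # an illustrative finite domain
P = lambda x: x > 0          # an illustrative predicate

forall_P = all(P(x) for x in domain)   # the conjunction chain
exists_P = any(P(x) for x in domain)   # the disjunction chain

print(forall_P, exists_P)  # True True
```

Python's `all` and `any` are exactly finite conjunction and disjunction chains, which is why this reduction only works when the domain can be listed out.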


There are 3 best solutions below

Answer (score 2):

Non-constructive proofs. One can often prove that a statement must be true of some element of a set without being able to prove it's true of any particular element.

Answer (score 1):

Right ... you may end up with infinitely long statements ... which is obviously not practical: you couldn't actually write out those statements in real life, and most proofs involving such statements would take infinitely many steps, so you couldn't prove anything in real life either.

Indeed, most logics simply do not allow infinitely long statements. Only a very special class of logics, called infinitary logics, allows infinitely long statements and infinitely long proofs.

Also, many times when you do proofs you don't know what the domain is. For example, when you derive $\forall x \ \neg P(x)$ from $\neg \exists x \ P(x)$, you do this without making any assumptions about the domain. Indeed, that is one of the selling points of logic: it can demonstrate consequences and equivalences regardless of the domain, meaning that its results can be applied to any domain. And if you do a proof where you write out a quantifier as an infinitely long conjunction or disjunction, then you are already assuming the domain to be of a certain nature, namely enumerable: for non-enumerable domains even an infinitely long conjunction or disjunction would not capture all the elements of the domain.
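The first-order derivation of $\forall x \ \neg P(x)$ from $\neg \exists x \ P(x)$ is domain-independent; a finite-domain check can only spot-check the semantic equivalence one structure at a time. As an illustration (domain and predicates are made up for the example), here is a brute-force verification over every predicate on a small domain:

```python
from itertools import product

def demorgan_holds(domain, P):
    # Compare "not exists x P(x)" with "forall x not P(x)",
    # evaluated semantically on one finite structure.
    return (not any(P(x) for x in domain)) == all(not P(x) for x in domain)

domain = ["a", "b", "c"]
# Check the equivalence for every one of the 2^3 predicates on this domain.
all_ok = all(
    demorgan_holds(domain, lambda x, t=dict(zip(domain, truth)): t[x])
    for truth in product([False, True], repeat=len(domain))
)
print(all_ok)  # True
```

Note what this does *not* show: it confirms the equivalence only for this one three-element domain, whereas the quantifier rules prove it for every domain at once, enumerable or not.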

Answer (score 0):

The second part of Bram28's answer I think gets to the real heart of the matter, so let me expand on that.

One of the key purposes of logic is to categorize structures. Any set $\Sigma$ of sentences in a language $L$ defines a corresponding class of $L$-structures: $$Mod(\Sigma)=\{\mathcal{M}: \mathcal{M}\models\Sigma\}.$$ For example, taking $L$ to be the language with a single binary function symbol $*$ and $\Sigma=\{\forall x,y,z((x*y)*z=x*(y*z))\}$ we get that $Mod(\Sigma)$ is the class of semigroups. Some natural classes of structures are first-order characterizable (like the class of semigroups) while others aren't (like the class of torsion groups).
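To make the semigroup example concrete, here is a sketch of checking whether a finite structure satisfies $\Sigma$, i.e. belongs to $Mod(\Sigma)$. The two operations tested are illustrative choices, not from the answer:

```python
from itertools import product

def is_semigroup(elements, op):
    """Check M |= forall x,y,z ((x*y)*z = x*(y*z)) on a finite structure."""
    return all(op(op(x, y), z) == op(x, op(y, z))
               for x, y, z in product(elements, repeat=3))

# Addition mod 3 is associative, so ({0,1,2}, +) is in Mod(Sigma) ...
print(is_semigroup(range(3), lambda x, y: (x + y) % 3))   # True
# ... while subtraction mod 3 is not associative, so it is not.
print(is_semigroup(range(3), lambda x, y: (x - y) % 3))   # False
```

The same sentence $\forall x,y,z((x*y)*z = x*(y*z))$ is being evaluated against different structures; that is the classification at work.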

The key point here is that we're looking at the same sentences across different domains. Replacing "$\forall x$" with "$\bigwedge_{x\in dom(\mathcal{M})}$" works in the context of a single fixed structure $\mathcal{M}$, but what if you want to talk about multiple structures at once?

This is especially important if you want to build a structure with certain properties: I can say "I want my structure to satisfy $\forall x\exists y(P(x,y))$" without having fully determined the domain yet.
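The same point can be seen computationally: one sentence, evaluated over different domains, gets different truth values, so the sentence cannot be identified with any single conjunction/disjunction chain. A sketch with a made-up relation:

```python
def holds(domain, P):
    # Evaluate M |= forall x exists y P(x, y) on a finite structure.
    return all(any(P(x, y) for y in domain) for x in domain)

P = lambda x, y: (x + y) % 3 == 0   # an illustrative binary relation

print(holds(range(3), P))   # True: every x has an additive inverse mod 3
print(holds([1], P))        # False: 1 + 1 = 2 is not divisible by 3
```

The sentence $\forall x\exists y(P(x,y))$ is fixed; only the structure varies. Unfolding it into a chain would freeze the domain in advance, which is exactly what the answer says we want to avoid.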