Why do we need such restrictions in logics?


Note: I am not competent in logic, so this question may look weird to you.
As far as I know, there are different types of logic (first-order logic, second-order logic, ...), and the difference between them is that each higher-order logic can quantify over a bigger class of objects. The question I have is: why do we need such a restriction? Why can't we make a logic where we could quantify over all objects? My intuition suggests that this would bring some paradoxes. Is that the real reason, and if so, is it the only one? If not, what is the reason for such restrictions in every logic? Thanks in advance.



Best Answer

Rather than look specifically at higher- versus first-order logic, let me focus on a broader point: there is a fundamental tension between expressiveness and tameness, and the former isn't always of greater importance.

For example, first-order logic is compact: if $\Gamma$ is a set of first-order sentences, then $\Gamma$ has a model iff every finite subset of $\Gamma$ has a model. Put another way, first-order logic admits a notion of "finitely long proof," since if $\Gamma$ entails $\varphi$ then some finite subset of $\Gamma$ entails $\varphi$ (apply compactness to $\Gamma\cup\{\neg\varphi\}$). Second-order logic meanwhile is not compact, since we can write a sentence $\theta$ in second-order logic which is true in exactly the finite structures (it's a good exercise to prove this, and show that this leads to a failure of compactness).
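To make the exercise concrete, here is one standard choice of $\theta$ (a sketch; "every injective function on the domain is surjective" characterizes finiteness, assuming choice in the metatheory), together with the family of first-order sentences that witnesses the failure of compactness:

```latex
% theta: the domain is Dedekind-finite. The quantifier over the
% function f is what makes this second-order.
\theta \;:\equiv\; \forall f\,\Bigl(\forall x\,\forall y\,\bigl(f(x)=f(y)\rightarrow x=y\bigr)
        \;\rightarrow\; \forall y\,\exists x\, f(x)=y\Bigr)

% lambda_n: "there are at least n elements" (first-order).
\lambda_n \;:\equiv\; \exists x_1\cdots\exists x_n \bigwedge_{1\le i<j\le n} x_i \neq x_j

% Every finite subset of Gamma = {theta} \cup {lambda_n : n >= 1} has a
% (finite) model, but Gamma itself has no model: compactness fails.
```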

So more power is not always a good thing: metatheorems about first-order logic, made possible by the restrictiveness of that logic, lead to consequences for classes of structures defined by first-order-expressible conditions. In general we want to strike a balance between power and tameness, and this can be quite tricky.


At this point it's a good idea to look at a specific application of the tameness of first-order logic: the Ax-Grothendieck theorem. We want to show that every injective polynomial function $\mathbb{C}^n\rightarrow\mathbb{C}^n$ is surjective. To do this, we argue as follows:

  • First, we prove - purely algebraically - that the result holds when we replace $\mathbb{C}$ with the algebraic closure of a finite field.

  • Next, we note that "injectivity implies surjectivity for polynomials" is expressible by a set of first-order sentences: there is a first-order theory $\Gamma$ such that whenever $F$ is a field, $F\models\Gamma$ iff every injective polynomial $F^n\rightarrow F^n$ is surjective.

  • Now we apply compactness: every finite subset of "$\Gamma$ + the field axioms + algebraic closure + characteristic $\neq p$ for each prime $p$" is satisfied in the algebraic closure of a finite field of large enough characteristic, so we get an algebraically closed field of characteristic zero satisfying $\Gamma$.

  • But such a field is elementarily equivalent to $\mathbb{C}$ (the theory of algebraically closed fields of characteristic zero is complete), and so we get that $\mathbb{C}\models\Gamma$, and we're done.
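For instance, in the language of rings the one-variable instances of $\Gamma$ can be written out explicitly (a sketch; $\varphi_{1,d}$ quantifies over the coefficients, and the sums abbreviate ring-language terms):

```latex
% "every injective polynomial of degree at most d in one variable is surjective"
\varphi_{1,d} \;:\equiv\; \forall a_0 \cdots \forall a_d\,
  \Bigl( \forall x\,\forall y\,\bigl(\textstyle\sum_{i\le d} a_i x^i
           = \sum_{i\le d} a_i y^i \;\rightarrow\; x = y\bigr)
  \;\rightarrow\; \forall z\,\exists x\; \textstyle\sum_{i\le d} a_i x^i = z \Bigr)

% Gamma collects the analogous sentences phi_{n,d} for all n and d; each one
% is first-order because n and d are fixed within each individual sentence.
```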

Note that it was crucial that we be able to fit the property we care about into a tame logical framework so that we could pass from the weak examples we could establish purely algebraically to the thing we actually care about.
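The purely algebraic input in the first bullet is, for an actual finite field, just the pigeonhole principle; here is a brute-force sanity check in Python (a sketch: $\mathbb{F}_5$ is only a toy stand-in, since the real theorem needs the algebraic closure $\overline{\mathbb{F}_p}$):

```python
from itertools import product

p = 5  # the finite field F_5 = {0, 1, 2, 3, 4} with arithmetic mod p

def poly(coeffs):
    """Return the map x -> c_0 + c_1*x + c_2*x^2 + ... over F_p."""
    return lambda x: sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

# Check injective => surjective for every single-variable polynomial
# of degree < 3 over F_5 (on a finite set this is just pigeonhole).
for coeffs in product(range(p), repeat=3):
    f = poly(coeffs)
    values = [f(x) for x in range(p)]
    injective = len(set(values)) == p
    surjective = set(values) == set(range(p))
    if injective:
        assert surjective
```

The point of the theorem, of course, is that $\overline{\mathbb{F}_p}$ is infinite, so pigeonhole no longer applies directly and genuinely algebraic work is needed there before compactness transfers the result to $\mathbb{C}$.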


The general study of logics beyond first-order, and of abstract properties of logics, is abstract model theory. One of the first serious results in the subject was Lindström's theorem, which showed that first-order logic is maximally strong among logics satisfying two important tameness properties (compactness and the downward Löwenheim-Skolem property).

The best-behaved logic extending first-order logic is generally agreed to be the infinitary logic $\mathcal{L}_{\omega_1,\omega}$, which has applications throughout descriptive set theory, computability theory, and even the model and proof theory of first-order logic itself. It also has some nice tameness properties of its own - nothing quite as good as first-order logic, but still much better than second-order logic (which is terrible in several ways). Dependence logic and its relatives provide another example of a reasonably tame extension of FOL.
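For a taste of the extra power (a sketch): $\mathcal{L}_{\omega_1,\omega}$ allows countably infinite conjunctions and disjunctions, which lets it pin down the standard natural numbers, something no first-order theory can do, by compactness:

```latex
% In the language {0, S}: "every element is some numeral S^n(0)"
\forall x \;\bigvee_{n<\omega}\; x = \underbrace{S(S(\cdots S}_{n}(0)\cdots))

% Together with the first-order axioms saying S is injective and 0 is not
% a successor, this L_{omega_1, omega} sentence has (N, 0, S) as its only
% model up to isomorphism.
```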

On the other hand, logics weaker than first-order logic also crop up, since they often enjoy tameness properties which first-order logic lacks. For example, by Church and Turing we know that first-order logic isn't decidable; however, it has decidable fragments which are still reasonably expressive, and which play serious roles in computer science. And decidability is merely one of the tameness properties we can get by restricting first-order logic (another very important one is the finite model property).
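For a toy illustration of how the finite model property yields decidability (a hedged sketch, not a production decision procedure): a classical result of Löwenheim gives that a satisfiable sentence of the monadic fragment without equality, with $k$ unary predicates, has a model of size at most $2^k$, so satisfiability can be decided by brute-force search. The formula encoding below is my own illustrative choice:

```python
from itertools import combinations, product

# Formulas over unary predicates P_0, ..., P_{k-1} and one object variable:
# ("P", i), ("not", f), ("and", f, g), ("forall", f), ("exists", f).

def holds(f, preds, n, x=None):
    """Evaluate formula f in the model with domain range(n),
    where preds[i] is the extension of P_i."""
    op = f[0]
    if op == "P":
        return x in preds[f[1]]
    if op == "not":
        return not holds(f[1], preds, n, x)
    if op == "and":
        return holds(f[1], preds, n, x) and holds(f[2], preds, n, x)
    if op == "forall":
        return all(holds(f[1], preds, n, y) for y in range(n))
    if op == "exists":
        return any(holds(f[1], preds, n, y) for y in range(n))
    raise ValueError(op)

def satisfiable(f, k):
    """Decide satisfiability by searching all models up to size 2**k."""
    for n in range(1, 2 ** k + 1):
        subsets = [frozenset(c) for r in range(n + 1)
                   for c in combinations(range(n), r)]
        for preds in product(subsets, repeat=k):
            if holds(f, preds, n):
                return True
    return False
```

For example, "something is $P_0$ and something is not $P_0$" is found satisfiable (in a 2-element model), while "everything is $P_0$ and something is not $P_0$" is correctly rejected after the finite search exhausts all candidates.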

Theoretically, we could also care about logics incomparable in strength to first-order logic. These do indeed exist. However, I don't know of any actually interesting examples (I've only ever seen them show up as weird counterexamples).


It's worth noting that in the above I'm talking only about technical tameness properties. There are also philosophical issues here, and much ink has been spilled over whether second-order logic is "meaningful" in the same way that first-order logic is (and more generally, to what extent quantification is something we can actually do in any serious sense). While these issues are quite interesting, I'm leaving them aside since I think that it's much more compelling, at least initially, to see purely mathematical issues: we can't sweep aside the role of compactness in Ax-Grothendieck by adopting a different philosophical perspective, we're actually forced to confront the mathematical aspects of logic itself.

Another Answer

If you collapse first-order and second-order, meaning that every property of objects (i.e., a formula with one free object variable) is itself an object, and you have a collection $Prop$ of all properties, then you can get Russell's paradox by constructing $R(x) :≡ x{∈}Prop ∧ ¬x(x)$. Clearly $R$ is a property, and so $R(R) ≡ ¬R(R)$, which is a contradiction.

Alternatively, using quantification over $Prop$ instead of membership in $Prop$, we can construct $S(x) :≡ ∃Q{∈}Prop\ ( x=Q ∧ ¬Q(Q) )$ and deduce the following: if $S(S)$, then there is some $C{∈}Prop$ with $S=C ∧ ¬C(C)$, which gives $¬S(S)$. If $¬S(S)$, then trivially $∃Q{∈}Prop\ ( S=Q ∧ ¬Q(Q) )$, and hence $S(S)$. Either way we get a contradiction.
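The first variant can even be "run": if we naively model properties as Python predicates and apply $R$ to itself, evaluating $R(R)$ reduces to $¬R(R)$, which reduces to $¬¬R(R)$, and so on forever (a sketch; the `RecursionError` is the operational shadow of the formula having no truth value):

```python
# Naive "properties are objects": a property is just a predicate,
# and predicates can be applied to predicates -- including themselves.
R = lambda x: not x(x)  # Russell's property: "x does not hold of itself"

try:
    verdict = R(R)  # would have to equal not R(R); never terminates
except RecursionError:
    verdict = None  # no truth value is ever produced

print(verdict)  # -> None
```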

To evade both variants, you must restrict which properties are objects (as in ZFC or MK), or you must not assume classical logic. For instance, if you switch to a 3-valued logic, then the natural conclusion is that $Prop$ does not have boolean membership, so $R(R)$ can be $null$. And similarly equality may not be boolean, so "$S=Q$" may be $null$.
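The 3-valued escape can be made concrete: under strong Kleene negation the third value is a fixed point of $¬$, so setting $R(R) = null$ is consistent with the equation $R(R) = ¬R(R)$ (a sketch using Python's `None` for $null$; the truth table is the standard strong Kleene one):

```python
# Strong Kleene negation over {True, None, False}, with None as "null".
def k_not(v):
    return None if v is None else (not v)

# Russell's equation R(R) = not R(R) asks for a fixed point of negation.
# In two-valued logic there is none; with null there is exactly one:
fixed_points = [v for v in (True, False, None) if k_not(v) is v]
print(fixed_points)  # -> [None]
```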