Is there any way to reduce standard second-order logic to first-order logic?


By saying "standard second-order logic" I am specifically ruling out Henkin semantics. It is my understanding that the approach generally taken is to map the second-order syntax to first-order syntax and second-order semantics to first-order semantics. Specifically, I would like to know the formal and rigorous definition of "reduction". I have developed a proof that standard second-order logic can be reduced to first-order logic, and I have heard that this contradicts existing impossibility results. I think the contradiction results from my not knowing the formal definition of "reduction". The definition I am using is that every proof in standard second-order logic maps uniquely to a proof in first-order logic such that the original second-order syntax and semantics are derivable from the resulting proof. The mapping is injective, meaning that for every standard second-order proof there exists exactly one first-order proof, but not every first-order proof has a corresponding second-order proof.

I have heard some claim that the definition of "reduction" is elementary equivalence. I know that elementary equivalence would be sufficient to show that standard second-order logic cannot be reduced to first-order logic, using the Löwenheim-Skolem Theorem (LST); however, this proof seems to rely on the requirement that standard second-order semantics map directly to first-order semantics. Have mathematicians considered the possibility of instead mapping standard second-order semantics to first-order syntax? If so, does this make "reduction" possible, and what are the drawbacks of such an approach compared with mapping syntax to syntax and semantics to semantics?

Furthermore, when the reduction is performed, this implies that second-order logic becomes complete and consistent. For example, when I reduce the halting problem I get a contradiction, which implies that one of the axioms of the standard second-order logic proof is false. In addition, I can prove (in first-order logic) that the false axiom must be the Axiom of Infinity, because when this axiom is corrected by replacing $\mathbb{N}$ with a finite subset of $\mathbb{N}$, there is no contradiction and the halting problem becomes decidable.


1 Answer


The real, inescapable obstacle isn't Löwenheim-Skolem; it's compactness. Compactness blocks every reasonable notion of reduction I can think of, and the argument looks purely at how the entailment relation behaves, with no reference to semantics at all.


The term "reduction" is used in different ways depending on the context. I think the most general one, though, is the following.

First, we'll use a very permissive notion of "logic": a logic is simply a pair $\mathcal{L}=(S, \triangleright)$ where $S$ is a class of things we call sentences and $\triangleright$ is a relation between sets of sentences, which we call the entailment relation. There is no inherent semantics here, nor are there restrictions on how complicated either piece can be; however, intuitively "$A\triangleright B$" means "every model of $A$ is also a model of $B$" (that is, we identify $\triangleright$ with $\models$).

The simplest notion of reduction, then, is just a map on sentences which preserves and reflects the entailment relation: given logics $\mathcal{L}_0=(S_0,\triangleright_0)$ and $\mathcal{L}_1=(S_1,\triangleright_1)$, a reduction from $\mathcal{L}_0$ to $\mathcal{L}_1$ is a map $f:S_0\rightarrow S_1$ such that for each $A, B\subseteq S_0$ we have $$A\triangleright_0B\quad\iff\quad f[A]\triangleright_1 f[B]$$ (where "$g[X]$" denotes $\{g(x): x\in X\}$).
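This preserve-and-reflect condition can be checked mechanically on small finite logics. Here is a minimal sketch; the toy logics, the map, and the helper names are illustrative inventions, not standard objects:

```python
from itertools import chain, combinations

def subsets(xs):
    """All subsets of a finite collection, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def is_reduction(f, sentences0, entails0, entails1):
    """Check that f preserves *and* reflects entailment:
    for all A, B subsets of S_0:  A |>_0 B  <=>  f[A] |>_1 f[B]."""
    for A in subsets(sentences0):
        for B in subsets(sentences0):
            fA = frozenset(f(x) for x in A)
            fB = frozenset(f(x) for x in B)
            if entails0(A, B) != entails1(fA, fB):
                return False
    return True

# Toy entailment: A entails B iff every sentence of B already appears in A.
contains = lambda A, B: B <= A

# An injective relabelling is a reduction between two copies of this logic;
# a map that collapses distinct sentences is not.
print(is_reduction(lambda s: s.upper(), ["p", "q"], contains, contains))
print(is_reduction(lambda s: "r", ["p", "q"], contains, contains))
```

Of course, for genuine first- and second-order logic no such brute-force check is possible; the point of the sketch is only to make the quantification over all pairs $A, B$ concrete.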

Note that we're not saying anything about how complicated $f$ can be - any old map will do.


For concreteness I'll write $\mathcal{L}_1=(S_1,\models_1)$ and $\mathcal{L}_2=(S_2,\models_2)$ for first- and second-order logic, respectively.

The point is that $\mathcal{L}_1$ is compact: if $\Gamma\models_1\varphi$ for $\Gamma\subseteq S_1,\varphi\in S_1$, then we have $\Gamma_0\models_1\varphi$ for some finite $\Gamma_0\subseteq \Gamma$. However, $\mathcal{L}_2$ is not compact. A counterexample is easy enough to whip up: for example, consider, in the language of PA together with a new constant symbol $c$, the set $\Gamma$ consisting of the usual system of second-order Peano arithmetic (so the "full" induction principle) together with, for each $n$, an axiom saying that $c\ge n$. According to the standard semantics we have $\Gamma\models_2 \perp$ (put another way: $\Gamma$ is unsatisfiable), but no finite subset of $\Gamma$ has this property.
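The shape of that counterexample, every finite subfamily satisfiable but the whole family not, can be sketched in miniature. This models only the axioms $c\ge n$ over the standard naturals, with a bounded search standing in for "there exists a natural number"; it is an illustration, not a decision procedure:

```python
# Axiom n asserts "c >= n"; a standard model of these axioms alone is
# just a choice of a natural number to interpret c.
def axiom(n):
    return lambda c: c >= n

def has_model(axioms, bound=10_000):
    """Bounded search for a natural number c satisfying every axiom
    (sufficient for the finite families below)."""
    return any(all(a(c) for a in axioms) for c in range(bound))

# Every finite subfamily is satisfiable: interpret c as any number at
# least as large as the largest n mentioned.
assert has_model([axiom(n) for n in range(100)])  # c = 99 works

# But the full family {axiom(n) : n in N} has no model: no natural
# number is >= every natural number. So compactness fails for the
# standard semantics.
```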

But this already rules out the possibility of a reduction from second-order logic to first-order logic. Suppose $f$ were a reduction of $\mathcal{L}_2$ to $\mathcal{L}_1$. Since $\Gamma\models_2\perp$, we have $f[\Gamma]\models_1f(\perp)$. By compactness of $\mathcal{L}_1$ we would then have $A\models_1f(\perp)$ for some finite $A\subseteq f[\Gamma]$. This $A$ in turn is just $f[\Gamma_0]$ for some finite $\Gamma_0\subseteq \Gamma$, and since $f[\Gamma_0]\models_1f(\perp)$ we must have $$\Gamma_0\models_2\perp,$$ contradicting the fact that no finite subset of $\Gamma$ is unsatisfiable.

More generally, no non-compact logic can ever be reduced to a compact logic.