In classical predicate logic it is commonly assumed that the domain of objects is non-empty. This validates inferences such as $$\forall x\, Fx \models \exists x\, Fx$$ as well as, when the identity predicate is available, the logical truth of $$\exists x~x=x.$$ What is the motivation for this assumption? After all, it is arguably not a logical contradiction to suppose that nothing exists.
In classical predicate logics, why is it usually assumed that at least one object exists?
979 views · Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail) · 4 answers
If you allow the domain to be empty, then you need to introduce more complexity to handle non-logical symbols.
You can relax or eliminate the assumption of domain non-emptiness, which gives you various flavors of free logic.
On the face of it, free logic addresses a slightly different problem, namely how to handle constant symbols that fail to refer to anything, or partial functions.
However, these are questions you need to address if you want to allow constant symbols in your non-logical vocabulary. In the theory of Peano Arithmetic, for example, there's a constant $0$. In an empty structure, there's nothing that $0$ can refer to ... so it's not clear how to build a structure with an empty domain that could be, or fail to be, a model of PA.
In classical (quantified) predicate logic, we assume that the domain of discourse contains at least one object and that every constant in the language refers to an object in that domain. You can of course build your own logical system without this idealization, but then you must always bear in mind that there may be no objects for your formulas to range over, which complicates the language considerably without much gain in analyzing real issues. Real issues are always grounded in existing physical objects or well-defined (mental) abstract objects; otherwise we are dealing with a somewhat nihilistic, purely imagined, unicorn- or Pegasus-like metaphysical problem, which free logic can be employed to describe.
A free logic is a logic with fewer existential presuppositions than classical logic. Free logics may allow for terms that do not denote any object. Free logics may also allow models that have an empty domain. A free logic with the latter property is an inclusive logic.
At a deeper philosophical level, the philosopher Quine has a famous dictum:
"To be is to be the value of a bound variable."
Consistent with Quine's dictum, existence itself is not a predicate in classical logic, so we must first assume that the domain ranged over by the bound variables is non-empty. In other words, in classical FOL the existence of values for your variables is already implicitly guaranteed by the non-emptiness assumption on the domain of discourse at this initial stage; the quantifiers then further determine which objects satisfy which predicates. In summary, in classical logic we are all realists: we commit to the non-emptiness of whatever domain of discourse interests us. If it turns out that no object in this non-empty domain satisfies some predicate $P(x)$, then we simply conclude $\neg \exists x P(x)$, or equivalently $\forall x \neg P(x)$.
I will present an abbreviated version of the traditional account of one-sorted classical logic's semantics, explain why we can't allow empty models under this semantics, and show how we can modify the traditional account to allow for empty models.
Fix a language $L$ - that is, a collection of function symbols and predicate symbols.
A structure is a non-empty set $M$, together with, for each $n$-ary function symbol $f$, a function $f_M : M^n \to M$, and for each $n$-ary predicate symbol $P$, an $n$-ary predicate $P_M \subseteq M^n$. The structure is usually abusively denoted as $M$.
Recall that we take the set of all variables to be a countably infinite set $V$. Given a structure $M$, a variable assignment for $M$ is defined to be a function $V \to M$. Given a variable assignment $\alpha$, a variable $v \in V$, and some $m \in M$, we define $\alpha[v \mapsto m]$ to be the variable assignment sending $v$ to $m$, and sending $x \neq v$ to $\alpha(x)$.
Given a term $t$ and a variable assignment $\alpha$, we can define the "interpretation of $t$ in $\alpha$", written as $\alpha(t)$, as follows:
$\alpha(v) = \alpha(v)$ whenever $v \in V$ (here, the left side is "the interpretation of $v$ in $\alpha$" and the right side is function application)
$\alpha(f(t_1, ..., t_n)) = f_M(\alpha(t_1), ..., \alpha(t_n))$ whenever $f$ is an $n$-ary function symbol and $t_1, ..., t_n$ are terms.
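The two clauses above can be sketched as a short recursive function. This is a minimal illustration, not part of the answer's formal development: I represent a term either as a variable name (a string) or as a tuple `(f, t1, ..., tn)` for a function symbol applied to sub-terms, and a structure's function interpretations as a plain dict of Python functions.

```python
def interpret_term(term, alpha, funcs):
    """Interpretation of `term` under variable assignment `alpha`.

    alpha : dict mapping variable names to domain elements
    funcs : dict mapping function-symbol names to their interpretations f_M
    """
    if isinstance(term, str):
        # A variable v: its interpretation is just alpha(v), i.e. a lookup.
        return alpha[term]
    # A compound term f(t1, ..., tn): interpret the sub-terms, then apply f_M.
    f, *args = term
    return funcs[f](*(interpret_term(t, alpha, funcs) for t in args))

# Example: the term s(s(x)) over the natural numbers with s as successor.
funcs = {"s": lambda n: n + 1}
alpha = {"x": 3}
print(interpret_term(("s", ("s", "x")), alpha, funcs))  # → 5
```

Note that the base case already requires `alpha` to be defined on every variable occurring in the term, which is exactly the point where the "flexible" assignments below will differ.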
The statement $M, \alpha \models \phi$, where $M$ is a structure, $\alpha$ is a variable assignment, and $\phi$ is a statement in 1st-order logic in the language $L$, is defined recursively as follows:
$M, \alpha \models t_1 = t_2$ if and only if $\alpha(t_1) = \alpha(t_2)$
$M, \alpha \models P(t_1, ..., t_n)$ if and only if $(\alpha(t_1), ..., \alpha(t_n)) \in P_M$
$M, \alpha \models \top$ always
$M, \alpha \models \bot$ never
$M, \alpha \models A \land B$ if and only if $M, \alpha \models A$ and $M, \alpha \models B$
... (other connectives omitted)
$M, \alpha \models \exists x \phi$ if and only if there is some $m \in M$ such that $M, \alpha[x \mapsto m] \models \phi$
$M, \alpha \models \forall x \phi$ if and only if for all $m \in M$, $M, \alpha[x \mapsto m] \models \phi$
We define $M \models \phi$ to mean that for all $\alpha : V \to M$, it is the case that $M, \alpha \models \phi$. This is read as "$M$ is a model of $\phi$".
Note that this definition immediately runs into a problem if we allow $M$ to be empty. This is because there are no variable assignments $\alpha : V \to M$. Therefore, $M \models \phi$ holds vacuously for every $\phi$ - including $\phi = \bot$. This is very much not what we want! We don't want models of contradictory phenomena. We want there to be models of statements if and only if these statements are consistent.
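The vacuity can be seen concretely. The following sketch (my own illustration, using a finite variable set rather than the countably infinite $V$ of the definition) enumerates all total assignments and checks "$M \models \phi$" as "every assignment satisfies $\phi$"; over an empty domain there are no assignments, so `all` over an empty collection returns true:

```python
from itertools import product

def assignments(variables, domain):
    """All total assignments alpha: variables -> domain, as dicts."""
    return [dict(zip(variables, values))
            for values in product(domain, repeat=len(variables))]

def models(domain, variables, satisfies):
    """M |= phi: every assignment alpha satisfies phi."""
    return all(satisfies(alpha) for alpha in assignments(variables, domain))

never = lambda alpha: False            # the semantics of the statement ⊥

print(models([0, 1], ["x"], never))    # False: a non-empty M is not a model of ⊥
print(models([], ["x"], never))        # True! no assignments exist, so M |= ⊥ vacuously
```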
Thankfully, there is a way to revise this account to allow for empty models.
A flexible variable assignment is defined to be a partial function $\alpha : V' \to M$, where $V'$ is a subset of $V$. A flexible variable assignment $\alpha : V' \to M$ is said to be "compatible with $\phi$" (my own terminology) if $FreeVariables(\phi) \subseteq V'$. Given some flexible variable assignment $\alpha : V' \to M$, some variable $v \in V$, and some $m \in M$, we can define a flexible variable assignment $\alpha[v \mapsto m] : V' \cup \{v\} \to M$ which sends $v$ to $m$ and sends $x \in V'$, $x \neq v$ to $\alpha(x)$.
Once we've defined flexible variable assignments, we can define the interpretation of a term over this flexible assignment. A flexible variable assignment $\alpha : V' \to M$ is compatible with a term $t$ iff $FreeVariables(t) \subseteq V'$. If $\alpha$ is compatible with $t$, then the interpretation of $t$ in $\alpha$, written $\alpha(t)$, is defined by
$\alpha(v) = \alpha(v)$ whenever $v$ is a variable (the right side being actual function application, the left side being "the interpretation of $v$ in $\alpha$")
$\alpha(f(t_1, ..., t_n)) = f_M(\alpha(t_1), ..., \alpha(t_n))$ for all $n$-ary function symbols $f$ and terms $t_1, ..., t_n$
The statement $M, \alpha \models \phi$, where $M$ is a structure, $\alpha$ is a flexible variable assignment on $M$, $\phi$ is a statement in the language $L$, and $\alpha$ is compatible with $\phi$, is recursively defined to mean
$M, \alpha \models t_1 = t_2$ if and only if $\alpha(t_1) = \alpha(t_2)$
$M, \alpha \models P(t_1, ..., t_n)$ if and only if $(\alpha(t_1), ..., \alpha(t_n)) \in P_M$
$M, \alpha \models \top$ always
$M, \alpha \models \bot$ never
$M, \alpha \models (A \land B)$ if and only if $M, \alpha \models A$ and $M, \alpha \models B$.
... (other connectives omitted)
$M, \alpha \models \exists x \phi$ if and only if there exists $m \in M$ such that $M, \alpha[x \mapsto m] \models \phi$.
$M, \alpha \models \forall x \phi$ if and only if for all $m \in M$, $M, \alpha[x \mapsto m] \models \phi$.
The statement $M \models \phi$ means that for all flexible variable assignments $\alpha : V' \to M$ compatible with $\phi$, it is the case that $M, \alpha \models \phi$.
Note that there's a bit more work here. For example, we have to show that if $\alpha$ is compatible with $\forall x \phi$, then $\alpha[x \mapsto m]$ is compatible with $\phi$.
But this does get us out of the earlier conundrum by allowing $M$ to be empty. This is because there is always the empty variable assignment, which is compatible with the statement $\bot$. So it is never the case that $M \models \bot$ even when $M$ is empty.
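This resolution can be illustrated in the same concrete style as before (again my own sketch, not the answer's formalism): a flexible assignment compatible with $\phi$ need only be defined on $\phi$'s free variables, so for the sentence $\bot$, which has none, the empty assignment $\{\} $ is compatible even over an empty domain, and it does not satisfy $\bot$:

```python
from itertools import product

def flexible_assignments(free_vars, domain):
    """All partial assignments defined exactly on phi's free variables."""
    return [dict(zip(free_vars, values))
            for values in product(domain, repeat=len(free_vars))]

def models(domain, free_vars, satisfies):
    """M |= phi: every compatible flexible assignment satisfies phi."""
    return all(satisfies(alpha)
               for alpha in flexible_assignments(free_vars, domain))

never = lambda alpha: False   # the semantics of ⊥, which has no free variables

# Even with an empty domain, the empty assignment {} is compatible with ⊥,
# so the empty structure is no longer vacuously a model of ⊥.
print(models([], [], never))  # → False
```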
It turns out that it's not too difficult to slightly modify the normal rules of classical predicate logic so that they are sound and complete over this modified definition of "models". The modified rules prove exactly the same statements as the "ordinary rules" if you assume $\exists x \top$.
Contrary to popular belief, many theorems are more elegantly stated if we allow empty structures. For just one example, the following is one of the most useful tests for quantifier elimination.
The theorem is false if we require the structure $A$ to be non-empty. If you look at Marker's book Model Theory: An Introduction, Theorem 3.1.4, he actually adds the unnatural hypothesis that the language contains a constant symbol in order to get around this issue!
On the other hand, one of the only places that I'm aware of where it really helps to assume all structures are non-empty is the prenex normal form. For example, the sentence $(\forall x\, P(x))\land Q$, where $P$ is a $1$-ary relation symbol and $Q$ is a proposition symbol (a $0$-ary relation symbol) is not equivalent to any sentence in prenex form. Of course, this can be fixed by changing the theorem: every first-order formula is equivalent over non-empty structures to one in prenex form. But I'll admit it is a little bit unsatisfying.
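The non-equivalence claimed above can be checked directly; here is a sketch of the argument (my reconstruction, not the author's proof):

```latex
% Over the empty structure, $\forall x\,\psi$ is vacuously true and
% $\exists x\,\psi$ is false, whatever $\psi$ is. So a prenex sentence with
% at least one quantifier has a truth value on the empty structure that is
% fixed, independent of the interpretation of $Q$. But on the empty
% structure
%   (\forall x\, P(x)) \land Q \;\equiv\; \top \land Q \;\equiv\; Q,
% which varies with $Q$. And a prenex sentence with no quantifiers is a
% quantifier-free sentence; since $P$ needs an argument and the language
% has no constants, such a sentence is a Boolean combination of $Q$ alone,
% which cannot be equivalent to $(\forall x\, P(x)) \land Q$ over
% non-empty structures (it does not depend on $P$). Hence no prenex
% sentence is equivalent to $(\forall x\, P(x)) \land Q$ over all
% structures, empty ones included.
```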
If anyone has other examples of places where forbidding empty structures genuinely seems to improve the theory, I'd like to hear about them. (Though fair warning: I'm a real partisan on this issue, so I'll probably respond by trying to convince you that there's an elegant solution allowing empty structures.)
In my experience, there are really no problems developing the basics of first-order logic with empty structures - once you fix the proof rules to be sound on empty structures, of course! But again, there are proof systems which are sound and complete on empty structures and just as elegant as traditional proof systems, if not more so. If you want to see the details of the completeness theorem worked out, you can look at Chapter 3 of these lecture notes on model theory, where I give a sequent calculus system that is sound and complete for many-sorted first-order logic in which any or all of the sorts are permitted to be empty.