What inference rules are invalid in empty L-structures?


I want to understand why empty L-structures are illegal/not allowed in classical FOL. I read here but didn't understand the answer. It seems to me that there is some inference rule in FOL (as described in these notes) that becomes invalid when the underlying set of the L-structure is empty, but I don't understand why.

Is it because of some inference rule involving quantifiers that empty L-structures are illegal, or is there a different reason why empty L-structures are disallowed?

I also asked:

Is the reason that vacuous statements are True because empty L-structures are illegal?

as a suggestion that perhaps vacuous statements are the reason, but I still don't have an answer that explains it. Perhaps it's the inference rule. Perhaps it's something else. I don't know yet.


There are 2 answers below.

Answer 1 (score: 13)

If we do a careful analysis of the rules of inference for quantifiers in natural deduction or the sequent calculus, we see that the rules for quantifiers don't require the domain to be non-empty. (So this is one answer for what [primitive] rules are valid in an empty domain.)

I'll focus on the universal elimination rule. (Dually, the existential introduction rule has the same issue.) It is often written as $$\frac{\Gamma\vdash\forall x.\varphi(x)}{\Gamma\vdash\varphi(t)}\rlap{\small{\forall E,t}}$$ where $\Gamma$ is a finite set of formulas and $t$ is some term. Written this way, this rule seems to warrant deriving $\bot$ from $\forall x.\bot$, i.e. proving $\neg\forall x.\bot$, which is equivalent to $\exists x.\top$, i.e. the domain being non-empty.

The "trick" is that to apply this rule of inference, we need to have a term, $t$, even if it doesn't occur in the resulting formula. Here's where a more careful analysis can be helpful. Write $\Gamma\vdash_V \psi$ to mean $\Gamma\vdash\psi$ except that all formulas are only allowed to contain free variables in the set $V$. (Really, $\Gamma\vdash_V\psi$ is the more primitive notion and is useful for defining the universal introduction rule.) More cleanly, we can define a set of terms with free variables in a given set $V$, and (thus) a set of formulas with free variables in $V$. $\vdash_V$ is then a (meta-)relation between sets of formulas with free variables in $V$ and a formula with free variables in $V$. A (conditional) proof is then a derivation of a formula with no free variables, i.e. $\Gamma\vdash_\varnothing\psi$.
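To make the "indexing by free variables" idea concrete, here is a small sketch in Python. The encoding is hypothetical (variables and constants as strings, applications of unary function symbols as tuples), and enumeration is cut off at a fixed nesting depth since the full term set is infinite once a function symbol is present:

```python
def terms(V, constants, functions, depth=2):
    """Terms whose free variables are drawn from V, up to a nesting depth.

    Hypothetical encoding: variables and constants are strings, and
    ("f", t) is the application of a unary function symbol f to a term t.
    """
    ts = set(V) | set(constants)
    for _ in range(depth):
        # close under application of each (unary) function symbol
        ts |= {(f, t) for f in functions for t in ts}
    return ts

# With V = ∅ and no constants, there are no terms at all, so universal
# elimination can never be applied:
print(terms(set(), [], ["f"]))   # set()

# With a free variable available, terms exist: x, f(x), f(f(x)), ...
print(terms({"x"}, [], ["f"]))
```

The first call returning the empty set is exactly the situation the derivation below runs into: no closed terms means no way to instantiate $\forall x.\bot$.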

To (indirectly) show that the domain is non-empty, we'd want to show $\vdash_\varnothing \neg\forall x.\bot$. Here's an attempted derivation: $$\dfrac{\dfrac{\dfrac{}{\forall x.\bot\vdash_\varnothing\forall x.\bot}\rlap{\small{Ax}}}{\forall x.\bot\vdash_\varnothing\bot}\rlap{\small{\forall E,?}}}{\vdash_\varnothing\neg\forall x.\bot}\rlap{\small{{\to}I}}$$

This derivation fails because, assuming there are no constants in our signature, we have no terms (with no free variables) at all to use to apply universal elimination.

What happens in many contexts, particularly for Hilbert-style systems, is that the set of terms always contains all possible variables. In that context, it is always possible to find a term: just pick some free variable. While indexing terms and formulas by sets of free variables is slightly more complicated than simply having a single set of terms/formulas, I find it to be significantly cleaner and clearer.

We get the semantic analogue of the above as follows. If the set of terms always includes all free variables, then to interpret terms we must say what is done with free variables. This means our interpretation function must specify the values for (typically countably infinitely many) free variables. This forces our semantic domain to be non-empty and is rather awkward. For example, we have uncountably many effectively equivalent interpretations even when we pick a finite semantic domain. If we index our interpretation functions by sets of free variables, then we only need to give values for the finite set of free variables we actually use. In particular, when that set is empty, we only need to give values for closed terms. Since there may be no closed terms, this allows the domain to be empty.
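The semantic side can be sketched the same way. In this hypothetical encoding, the environment `env` supplies values only for the free variables actually in use, so when $V = \varnothing$ and the signature has no constants, both `env` and `interp` may be empty and nothing forces the domain to be non-empty:

```python
def eval_term(t, env, interp):
    """Interpret a term under an environment indexed by the free
    variables actually in use. Hypothetical encoding: variables and
    constants are strings; ("f", arg) applies a unary function symbol f.
    """
    if isinstance(t, tuple):
        f, arg = t
        return interp[f](eval_term(arg, env, interp))
    return env[t] if t in env else interp[t]

# V = ∅, no constants: env and interp can both be empty, there are no
# closed terms to interpret, and the domain may be empty.
env, interp = {}, {}

# V = {"x"}: now we must supply a value for x, forcing a non-empty domain.
env = {"x": 1}
interp = {"s": lambda n: n + 1}
print(eval_term(("s", ("s", "x")), env, interp))  # 3
```

Contrast this with interpretations that must assign values to *all* countably many variables up front: there, even a trivial signature demands a non-empty domain.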

So, as Eric Wofsey said in a comment on your other question, it's a bit of a "historical artifact", quite possibly one that arose somewhat accidentally. The core invalid rule, at least from the perspective of natural deduction/sequent calculus, is universal elimination/existential introduction in a context where you have no terms. There are, of course, infinitely many rules derivable from this invalid rule that would also be invalid.

Answer 2 (score: 0)

In empty structures, $\forall x Fx$ is vacuously true, $\exists x Fx$ is false. So $\forall x Fx \therefore \exists x Fx$ will count as invalid, if empty structures are allowed. Standard predicate logic takes this to be valid.
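As a quick illustration (empty Python lists standing in for an empty domain), Python's built-ins `all` and `any` follow exactly this convention: `all` is vacuously true over an empty iterable, while `any` is false.

```python
domain = []           # the domain of an empty structure
F = lambda d: d > 0   # any predicate at all; the choice doesn't matter

print(all(F(d) for d in domain))  # True:  ∀x Fx is vacuously true
print(any(F(d) for d in domain))  # False: ∃x Fx is false
# So ∀x Fx ∴ ∃x Fx has a countermodel once empty structures are allowed.
```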

What to do?

The principles of logic aren't handed down, once and for all, on tablets of stone. Rather, choice of logical system involves cost-benefit assessments -- e.g. we buy ease of manipulation here at the cost of unnaturalness or artificiality there.

A familiar example: recall that in our standard formal language for predicate logic the basic form of quantifiers is unary ($\forall x Fx$) rather than binary (something like $\forall x (Fx : Gx)$). The cost of this is that we have to render restricted quantifications using connectives, so 'All $F$ are $G$' has to be rendered as $\forall x (Fx \to Gx)$. Warranting a syllogism in Barbara then comes to depend on the logic of conditionals.

Now, on reflection, isn't it implausible that we should understand Barbara as valid in virtue of something as contentious as the logic of conditionals?! Yet we normally cheerfully pay the price, thinking that the gain in simplicity we get by sticking with unary quantifiers is worth the cost in a certain artificiality in the rendering of restricted quantifiers.

Similarly, we standardly assume in our predicate logic that names all have references and domains are populated. That buys us some simplicity at the cost of offending against the principle that logic should be topic neutral and the validity of inferences shouldn't depend on how the world actually is. And again, we normally cheerfully pay the price, thinking that the gain in simplicity is worth it (have a look at varieties of "free logic" to see the cost of different strategies for allowing denotation-less names and empty domains).

But, having said that, if we keep the usual assumption that names have denotations, yet allow that domains can be empty for a signature that lacks names, then we can get a tidy system of logic (obviously for a signature with names, if we are assuming names have denotations, we had better have a populated domain for those denotations to live in!). In fact, the elementary textbook Logic by Wilfrid Hodges has a tableaux system of this kind.

But as he sets things up, this does lead to an odd result. In his system we apparently get

  1. $\forall x Fx \vdash Fn$
  2. $Fn \vdash \exists x Fx$
  3. $\forall x Fx \nvdash \exists x Fx$

This looks like a failure of transitivity of entailment. To save transitivity, we'd have to say that $\forall x Fx \vdash \exists x Fx$ holds when working in a signature which has (denoting) names, but fails in a signature without names where the domain can be empty. But normally we think of an argument $\alpha \therefore \beta$ as depending for its validity only on the content of $\alpha$ and $\beta$.
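The three facts can be checked semantically on finite models. This is a sketch, with predicates as Boolean functions and an illustrative domain and name; (1) and (2) hold in every structure whose signature contains the name $n$, since $n$ must then denote an element of a necessarily non-empty domain, while (3)'s countermodel needs a name-free signature:

```python
def forall(dom, F): return all(F(d) for d in dom)
def exists(dom, F): return any(F(d) for d in dom)

# (1) ∀x Fx ⊢ Fn and (2) Fn ⊢ ∃x Fx, checked on a sample structure
# where the name n denotes the element 2:
dom, n = {1, 2, 3}, 2
F = lambda d: d > 0
assert (not forall(dom, F)) or F(n)   # (1): if ∀x Fx then Fn
assert (not F(n)) or exists(dom, F)   # (2): if Fn then ∃x Fx

# (3) In a signature without names the domain may be empty, and there
# ∀x Fx holds while ∃x Fx fails -- the countermodel behind ⊬:
empty = set()
print(forall(empty, F), exists(empty, F))  # True False
```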

You pays your money and you makes your choices. The conventional assumption of predicate logic (names have references, domains are populated) reflects the suppositions that are usually in play as we argue -- we suppose, at least temporarily for the sake of argument, that our talk isn't about nothing -- and buys simplicity of operation. For most purposes, the price is right. Or so goes the usual story.