Why not ban nested quantifiers over the same variable?


People seem to think (1, 2, 3) that a wff can have nested quantifiers over the same variable, e.g., $\forall x(Px \wedge \exists x Qx)$.

However, consider the following argument:

  1. $\forall x \exists y Pxy$ (Premise)
  2. $\exists y P\hat{x}y$ (1, $\forall$E)
  3. $\forall y \exists y Pyy$ (2, $\forall$I)

This is clearly not valid, but what is wrong with it? According to Wikipedia, the application of $\forall$I is wrong because $y$ already occurs in 2.

Here is another invalid argument:

  1. $\forall x Pxc$ (Premise)
  2. $\exists x \forall x Pxx$ (1, $\exists$I)

This is considered invalid for a similar reason: the application of $\exists$I is wrong because $x$ occurs in 1.

But we wouldn't need these restrictions if we just banned nested quantifiers over the same variable. This seems like a simpler rule and we're not losing any expressive power. Does anyone set up predicate logic this way?

There are 5 answers below.

Answer 1

The issue is that this kind of rule requires awareness of the contents of smaller wffs. Think about the other rules: none of them says anything like "if such-and-such a symbol appears inside...". The rule you're proposing would look something like this: "If $\varphi$ is a wff, $v$ is a variable, and $\varphi$ does not contain the strings '$\forall v$' or '$\exists v$', then $\forall v\varphi$ and $\exists v\varphi$ are wffs." This is significantly more complicated than any of the standard rules, and because it "looks inside" $\varphi$, it makes standard techniques like induction on the complexity of formulas significantly more awkward. Think of it like a proof by induction (in ordinary arithmetic) where for some reason you had to use different rules whenever $n$ had a $2$ in its decimal expansion. Technically it wouldn't be a real obstacle: as long as the alternate rules had the same amount of power, you could still prove what you needed to prove, but it would be super inconvenient.
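To make the cost concrete, here is a minimal sketch in Python (my own illustration, not part of the answer), with formulas encoded as nested tuples. Note how the proposed formation rule has to recurse through the entire body before it can accept a quantifier, whereas the standard rule never inspects the body at all:

```python
# Formulas as nested tuples, e.g.
# ("forall", "x", ("and", ("P", "x"), ("exists", "x", ("Q", "x")))).

def quantified_vars(f):
    """Every variable bound by a quantifier anywhere inside f.
    This is the 'looking inside' that the standard rules never need."""
    if isinstance(f, str):               # a bare variable
        return set()
    if f[0] in ("forall", "exists"):
        _, v, body = f
        return {v} | quantified_vars(body)
    out = set()                          # connective or predicate: recurse
    for part in f[1:]:
        out |= quantified_vars(part)
    return out

def quantify(q, v, body):
    """The proposed stricter formation rule: refuse to build a wff
    that quantifies over v when body already quantifies over v."""
    if v in quantified_vars(body):
        raise ValueError(f"'{v}' is already quantified inside the body")
    return (q, v, body)

quantify("forall", "y", ("exists", "x", ("Q", "x")))    # accepted
# quantify("forall", "x", ("exists", "x", ("Q", "x")))  # would raise ValueError
```

The standard rule, by contrast, only needs to know that `body` is a wff at all; it never asks which symbols occur inside it.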

It also wouldn't actually help the situations you're looking at. The arguments you presented would still be wrong, and the only reason would still be "because you can't do that"; it would just be "you can't write that sentence" instead of "you can't use that rule like that". So, to summarize: adding that restriction to the construction of wffs would cost simplicity in a lot of proofs in the meta-language, and wouldn't improve the situation enough to make a difference.

Answer 2

In your first example $y$ does not occur free in step 2, and in your second example $x$ does not occur free in step 1, so the problem is not with the variable occurrences in the antecedents of the inference steps. What is wrong in both examples is that the substitutions were not carried out correctly when deriving the succedents of the inference steps. When you substitute $y$ for $\hat{x}$ in $\exists y P\hat{x}y$ you need to rename the bound variable $y$ to prevent it being "captured" by the substitution. The result should be something like $\exists y'~Pyy'$.
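The renaming this answer describes can be sketched in code. Below is a minimal capture-avoiding substitution in Python (my own illustration; the tuple encoding of formulas and the fresh-name scheme `v0, v1, ...` are assumptions, not anything from the thread):

```python
# Formulas as nested tuples, e.g. ("exists", "y", ("P", "x", "y")) for
# the formula exists y. P(x, y).
import itertools

def free_vars(f):
    """Free variables of a formula/term tree."""
    if isinstance(f, str):
        return {f}
    if f[0] in ("forall", "exists"):
        _, v, body = f
        return free_vars(body) - {v}
    return set().union(*(free_vars(a) for a in f[1:]))

def fresh(avoid):
    """First name v0, v1, ... not in `avoid`."""
    return next(v for v in (f"v{i}" for i in itertools.count())
                if v not in avoid)

def subst(f, x, t):
    """Substitute term t for free occurrences of variable x, renaming
    bound variables that would capture free variables of t."""
    if isinstance(f, str):
        return t if f == x else f
    if f[0] in ("forall", "exists"):
        q, v, body = f
        if v == x:                      # x is bound here: nothing to do
            return f
        if v in free_vars(t):           # would capture: rename v first
            w = fresh(free_vars(body) | free_vars(t) | {x})
            body = subst(body, v, w)
            v = w
        return (q, v, subst(body, x, t))
    return (f[0],) + tuple(subst(a, x, t) for a in f[1:])

# The answer's case: substituting y for x in exists y. P(x, y) forces
# the bound y to be renamed first.
print(subst(("exists", "y", ("P", "x", "y")), "x", "y"))
# -> ('exists', 'v0', ('P', 'y', 'v0'))
```

Dropping the capture check (the `if v in free_vars(t)` branch) reproduces exactly the faulty step 3 of the question's first argument.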

Answer 3

In Barendregt's Lambda Calculus book, he sets up a variable convention.

Recognizing that the terms $\lambda x.x$ and $\lambda y.y$ are $\alpha$-convertible, and in some sense "the same", we say that in any proof involving finitely or countably many terms, we $\alpha$-convert them so that the bound variables are all distinct, and distinct from the free variables.

This avoids free variables being bound in substitutions.

I'm not that familiar with the logic literature, but I wouldn't be surprised if a similar convention exists.

The effect would be that a formula like $\forall y \exists y P(y,y)$ is indeed well-formed, but for clarity we would instead write the "equivalent" formula $\forall x \exists y P(y,y)$.
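A renamer implementing that convention might look like this (a Python sketch of my own; the tuple encoding of formulas and the fresh names `x0, x1, ...` are assumptions). Applied to $\forall y \exists y P(y,y)$ it renames the two binders apart, and the inner binder keeps both occurrences of the variable, just as described above:

```python
import itertools

def free_vars(f):
    """Free variables of a formula tree built from nested tuples."""
    if isinstance(f, str):
        return {f}
    if f[0] in ("forall", "exists"):
        return free_vars(f[2]) - {f[1]}
    out = set()
    for part in f[1:]:
        out |= free_vars(part)
    return out

def barendregt(f):
    """Alpha-convert f so all bound variables are pairwise distinct
    and distinct from the free variables."""
    taken = set(free_vars(f))                 # never clash with free vars
    names = (f"x{i}" for i in itertools.count())

    def go(f, env):
        if isinstance(f, str):
            return env.get(f, f)              # rename bound occurrences
        if f[0] in ("forall", "exists"):
            q, v, body = f
            w = next(n for n in names if n not in taken)
            taken.add(w)
            return (q, w, go(body, {**env, v: w}))
        return (f[0],) + tuple(go(a, env) for a in f[1:])

    return go(f, {})

# forall y. exists y. P(y, y): the inner binder shadows the outer one,
# so after renaming the outer quantifier is visibly vacuous.
print(barendregt(("forall", "y", ("exists", "y", ("P", "y", "y")))))
# -> ('forall', 'x0', ('exists', 'x1', ('P', 'x1', 'x1')))
```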

Answer 4

It could be done, as described in the other answers.

But it would not make capture-avoiding substitution easier to define, so there's no real benefit.

On the other hand, we would lose the sometimes useful property that a sentence can be substituted into any context without needing to look inside it.

It would also be more difficult to define local rewrites of a formula that introduce a new variable and its quantifier somewhere in the parse tree. Instead of just looking down from the new quantifier's location to make sure it won't capture anything we'd also need to look up to make sure it won't shadow anything. Needing to "look in both directions" might heavily complicate the structure of an inductive definition.
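The "look in both directions" point can be made concrete with a sketch (my own illustration, assuming formulas encoded as nested tuples): the usual capture check only inspects the subtree below the insertion point, while the ban would additionally require checking the binders inside that subtree and along the path above it:

```python
def free_vars(f):
    """Free variables of a formula tree."""
    if isinstance(f, str):
        return {f}
    if f[0] in ("forall", "exists"):
        return free_vars(f[2]) - {f[1]}
    out = set()
    for part in f[1:]:
        out |= free_vars(part)
    return out

def bound_vars(f):
    """Variables bound by some quantifier inside f."""
    if isinstance(f, str):
        return set()
    if f[0] in ("forall", "exists"):
        return {f[1]} | bound_vars(f[2])
    out = set()
    for part in f[1:]:
        out |= bound_vars(part)
    return out

def ok_standard(v, subtree):
    """Usual side condition: look *down* only, to avoid capture."""
    return v not in free_vars(subtree)

def ok_with_ban(v, subtree, binders_above):
    """Under the proposed ban we must also rule out nested binders
    below and shadowed binders *up* along the path."""
    return (v not in free_vars(subtree)
            and v not in bound_vars(subtree)
            and v not in binders_above)
```

For the subtree $Px \wedge \exists y\,Qy$, quantifying over $y$ passes the standard check but fails the ban's downward check, and quantifying over a variable that some ancestor already binds fails the new upward check.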

Answer 5

Nested quantifiers are not really the source of the elimination/introduction confusion.

Try the argument using the supposed problem child, $\forall x\,(Px \wedge \exists x\,Qx)$:

  1. $\forall y\,(Py \wedge \exists x\,Qx)$ as premised.

  2. $Pa \wedge \exists x\,Qx$ via $\forall$ elimination to arbitrary $a$.

  3. $\forall x\,(Px \wedge \exists x\,Qx)$ via $\forall$ introduction from arbitrary $a$.

This is no problem. When introducing a universal quantifier, you may safely alpha-replace the arbitrary witness $a$ with any variable which does not occur free inside any shared scope. Here, $x$ does not occur free, as it is bound by the existential.

Your original argument did have $x$ occurring inside a shared scope. It was this that caused the issue, not the nesting of quantifiers.