Hidden Dependencies With Mixed Quantifiers

I just read about hidden dependencies in Barwise's LPL book, and have a question about certain proofs.

I understand the idea that a constant introduced by existential instantiation inside a subproof about an arbitrary constant may depend on that constant, and so you can't apply universal generalization to that constant afterwards. But what about statements that are obviously true, like $$\exists y\in \mathbb{R} \,\forall x \in \mathbb{R}(x+y=x)$$

Would a proof of this statement violate the hidden-dependency rule, which forbids universal generalization over a constant when an earlier existential instantiation depends on that constant?

There are 2 answers below.

BEST ANSWER

The proof of this statement is simply a matter of exhibiting a $y$ for which $\forall x\,(x+y = x)$ holds.

In other words, the statement asks whether there exists an additive identity $y\in \mathbb R$ such that $x+y=x$ for all $x\in \mathbb R$.

Well, we do know that $$\forall x \in \mathbb R\,(x+ 0 = x).$$

So there does in fact exist a $y \in \mathbb R$ such that for all $x\in \mathbb R$, it is true that $x+y=x$, because we can use $0$ as a witness for $y\in \mathbb R$.

That leaves us to prove only $$\forall x \in \mathbb R( x+0 = x),$$ which we already know. No hidden-dependency rule is violated here: the witness $0$ is a fixed constant, chosen before $x$ is generalized, so it cannot depend on $x$.
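For readers who like to machine-check this kind of argument, here is a sketch of the same proof in Lean 4, assuming Mathlib is available (my addition, not part of the original answer). The tactic order mirrors the quantifier order: the witness is supplied before $x$ enters scope, so no dependency can arise.

```lean
import Mathlib

-- ∃ y, ∀ x, x + y = x over ℝ: pick the witness 0 first,
-- then generalize x. Since 0 is fixed before x is introduced,
-- it cannot depend on x.
example : ∃ y : ℝ, ∀ x : ℝ, x + y = x := by
  refine ⟨0, ?_⟩   -- witness chosen with no x in scope
  intro x          -- now generalize x
  ring             -- closes x + 0 = x
```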

ANSWER

[Aside: I'm increasingly of the opinion that "illustrating" logic with natural language sentences, especially "everyday" sentences, is harmful. The connections between natural language and logic are quite complex; one should know logic before trying to draw them, and of course it's completely unnecessary if you're not interested in linguistics. I don't have a problem with reading formal expressions into natural language, but the image of that reading is a highly stylized subset of natural language.]

I strongly prefer a sequent-style presentation of logic rules/proofs (with proof objects, but I won't go that far here) as opposed to the vertical nested lists of Fitch style. This is mostly a superficial distinction, but I find it makes it easier to discuss the rules of logic. Basically, you write out the lists horizontally instead of vertically, e.g. $$\begin{align} & |\ 1.\ P & & \\ & \underline{|\ 2.\ Q} &\text{ becomes } &\quad P, Q \vdash R\\ & |\ 3.\ R & & \end{align}$$

Usually, the list of premises "$P, Q$" is called $\Gamma$, leading to $\Gamma \vdash R$. Then $\Gamma, S \vdash R$ would mean $P,Q,S \vdash R$. Rules are then presented vertically, e.g. conjunction introduction: $$\frac{\Gamma\vdash P \qquad \Gamma\vdash Q}{\Gamma\vdash P\land Q}$$ This reads as "if $\Gamma \vdash P$ and $\Gamma \vdash Q$, then $\Gamma \vdash P\land Q$", and corresponds to the nested lists in Fitch style. The benefit of Fitch style is that we don't have to keep copying the premises (i.e. $\Gamma$), which is good for actually doing proofs. The downside is that it is awkward to refer to the whole collection of premises (the context, i.e. $\Gamma$), which is bad for explaining some proof rules.

Here is the rule for universal generalization in sequent style: $$\frac{\Gamma \vdash P \qquad x \notin \text{FV}(\Gamma)}{\Gamma\vdash\forall x.P}$$ where $\text{FV}(\Gamma)$ means the set of free variables of $\Gamma$, which is just the union of the sets of free variables of the predicates in $\Gamma$. With this, it's very easy to see what goes wrong with a proof of, say, $\exists y.\forall x.x+y=0$ that doesn't go wrong with $\forall x.\exists y.x+y=0$ or with $\exists y.\forall x.x + y = x$.
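As a side remark of mine (not part of the original answer), proof assistants enforce this side condition automatically: in Lean 4, `intro x` always introduces a *fresh* variable, so $x \notin \text{FV}(\Gamma)$ holds by construction, and no existing hypothesis can mention $x$. A minimal sketch, assuming Lean 4 with Mathlib:

```lean
import Mathlib

-- Universal generalization: `intro x` succeeds precisely because
-- the fresh x cannot occur free in any hypothesis already in the
-- context Γ (here, just h).
example (P : ℝ → Prop) (h : ∀ a : ℝ, P a) : ∀ x : ℝ, P x := by
  intro x      -- x is fresh: x ∉ FV(Γ) automatically
  exact h x
```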

Let's start with the true statement $\forall x.\exists y.x+y = 0$. Writing $\cdot$ for the context with no premises, we get the proof: $$\cfrac{\cfrac{\cfrac{\cfrac{}{\cdot\vdash 0=0}}{\cdot\vdash x+(-x)=0}}{\cdot\vdash\exists y.x+y=0}\qquad \cfrac{}{x\notin\text{FV}(\cdot)}}{\cdot\vdash\forall x.\exists y.x+y=0}$$
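This proof can be replayed in Lean 4 (a sketch of mine, assuming Mathlib): because $x$ is introduced before the witness is chosen, the witness is free to mention $x$.

```lean
import Mathlib

-- ∀ x, ∃ y, x + y = 0 over ℝ: x enters the context first,
-- so the witness -x may legitimately depend on it.
example : ∀ x : ℝ, ∃ y : ℝ, x + y = 0 := by
  intro x            -- x is now in scope
  refine ⟨-x, ?_⟩    -- the witness is allowed to mention x
  ring               -- closes x + -x = 0
```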

Next consider the following proof of $\exists y.\forall x.x+y=x$: $$\cfrac{\cfrac{\cfrac{\cfrac{}{\cdot\vdash x=x}}{\cdot\vdash x+0=x}\qquad \cfrac{}{x\notin\text{FV}(\cdot)}}{\cdot\vdash\forall x.x+0=x}}{\cdot\vdash \exists y.\forall x.x+y=x}$$

Now see what happens when we try to do something along the lines of the first proof for $\exists y.\forall x.x+y=0$: $$\cfrac{???}{\cdot\vdash\exists y.\forall x.x+y=0}$$ Immediately, we seem to be stuck. We'd like to substitute $(-x)$ for $y$, but $x$ isn't in scope, and attempting to do such a substitution anyway would lead to something like $\forall z.z+(-x) = 0$, because capture-avoiding substitution would require us to rename the variable bound by $\forall$.

But we're clever, so we continue as follows: $$\cfrac{\cfrac{\cfrac{\cfrac{\cfrac{\cfrac{}{z = -x\vdash 0 = 0}}{z = -x\vdash x + (-x) =0}}{z = -x\vdash x + z = 0}\qquad x\notin\text{FV}(z = -x)}{z = -x\vdash \forall x.x+z = 0}}{z = -x\vdash\exists y.\forall x.x+y=0}}{\cdot\vdash\exists y.\forall x.x+y=0}$$ This proof almost succeeds, but it fails the side condition on universal generalization that $x\notin\text{FV}(z = -x)$. That is, $z$ has a (now not so) "hidden" dependency on $x$. This makes complete sense: if we have a premise involving $x$, here $z = -x$, then $x$ isn't completely unconstrained.

If we rewrite the second proof to be more like this failed proof, we see the difference: $$\cfrac{\cfrac{\cfrac{\cfrac{\cfrac{}{z=0\vdash x=x}}{z=0\vdash x+0=x}}{z=0\vdash x+z=x}\qquad \cfrac{}{x\notin\text{FV}(z=0)}}{z=0\vdash\forall x.x+z=x}}{\cfrac{z=0\vdash\exists y.\forall x.x+y=x}{\cdot\vdash \exists y.\forall x.x+y=x}}$$
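To round this off with a machine-checked sketch (my addition, assuming Lean 4 with Mathlib): the dependency analysis above is exactly why $\exists y.\forall x.P$ entails $\forall x.\exists y.P$ but not conversely. In the provable direction the witness is fixed before $x$ appears, so it works uniformly for every $x$; the converse fails because different $x$ may demand different witnesses.

```lean
import Mathlib

-- The provable direction: a single witness y, chosen before x,
-- works for every x.
example (P : ℝ → ℝ → Prop) (h : ∃ y, ∀ x, P x y) :
    ∀ x, ∃ y, P x y := by
  intro x
  obtain ⟨y, hy⟩ := h
  exact ⟨y, hy x⟩

-- The converse fails for P x y := (x + y = 0): although every x has
-- a witness y = -x, no single y works for all x, since x = 0 forces
-- y = 0 while x = 1 forces y = -1.
example : ¬ ∃ y : ℝ, ∀ x : ℝ, x + y = 0 := by
  rintro ⟨y, hy⟩
  have h0 := hy 0   -- 0 + y = 0
  have h1 := hy 1   -- 1 + y = 0
  linarith          -- the two constraints are inconsistent
```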