Restrictive Rules for LK System


I have a question regarding the restrictive nature of the $\forall(R)$ and $\exists(L)$ rules in the sequent calculus LK. I don't really understand why the restrictions exist in the first place. Why does the rule

$$ \frac{\Gamma \to \Delta, A(b)}{\Gamma \to \Delta, \forall x A(x)} $$

have the restriction that the variable $b$ cannot occur in the conclusion (and similarly for $\exists$ left)? This comes from having trouble understanding why $\forall x \exists y\, Pxy \to \exists y \forall x\, Pxy$ is invalid. But if I understood the restriction, maybe I would understand why that sequent is invalid! Thank you!

There are 2 answers below.

BEST ANSWER

If $b$ is in $\Gamma$ or in $\Delta$, then the occurrence of $b$ in $A$ might depend on some formulas of $\Gamma$ or $\Delta$. If such a dependency exists, you cannot just replace $A(b)$ with $\forall x A(x)$.

$\forall x\, A(x)$ means that $A$ holds for every $x$ (so, with no dependencies). Now, say $b$ occurs in some formulas of $\Gamma$ or of $\Delta$. Then the occurrence of $b$ in $A$ might depend on other variables, which means that $A$ holds not for all $b$ but just for some $b$. Therefore you cannot conclude that $A$ holds for all $x$, where $x$ substitutes $b$.

Briefly, we want $b$ not to occur free in $\Gamma$ or in $\Delta$ because otherwise $R\forall$ would erase a possible dependency on $b$. Dually, we need this constraint for $L\exists$ too, because that rule might create non-existent dependencies. You can see this in the "deduction" below for the example sequent.

$\forall x \exists y\, Pxy \to \exists y \forall x\, Pxy$ is indeed invalid. In the formula $\forall x \exists y\, Pxy$, $y$ may depend on $x$, while in the formula $\exists y \forall x\, Pxy$, one and the same $y$ must work for all $x$.
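To make the invalidity concrete, here is a small finite counter-model (my own illustration; the choice of predicate is an assumption, not from the answer): over the domain $\{0, 1, 2\}$, read $Pxy$ as "$y$ is the successor of $x$ modulo $3$". Then every $x$ has a witness $y$, but no single $y$ works for every $x$. A brute-force check in Python:

```python
# Finite counter-model for  ∀x∃y Pxy → ∃y∀x Pxy  (illustrative sketch).
# Domain {0, ..., n-1}; P(x, y) reads "y is the successor of x mod n".
n = 3
D = range(n)
P = lambda x, y: y == (x + 1) % n

forall_exists = all(any(P(x, y) for y in D) for x in D)  # ∀x∃y Pxy
exists_forall = any(all(P(x, y) for x in D) for y in D)  # ∃y∀x Pxy

print(forall_exists, exists_forall)  # True False
```

The antecedent holds because each $x$ has its own witness $y = x + 1 \bmod 3$; the consequent fails because those witnesses differ with $x$.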

If it weren't for the constraints on $L\exists$ and $R\forall$, a "deduction" of $\forall x\exists y\, Pxy \Rightarrow \exists y\forall x\, Pxy$ in LK would be
$$\begin{array}{ll}
Pxy \Rightarrow Pxy & \\
\exists y\, Pxy \Rightarrow Pxy & (L\exists)\\
\exists y\, Pxy \Rightarrow \forall x\, Pxy & (R\forall)\\
\forall x\, \exists y\, Pxy \Rightarrow \forall x\, Pxy & (L\forall)\\
\forall x\, \exists y\, Pxy \Rightarrow \exists y\, \forall x\, Pxy & (R\exists)
\end{array}$$

The rules applied are $L\exists$, $R\forall$, $L\forall$ and $R\exists$, in this order; the first two steps are exactly the ones that violate the restrictions, since the eigenvariables $y$ and $x$ occur free in the rest of the sequent.

Now, if $b$ occurs free in a formula $B$ with $B \in \Gamma$, then $\forall b\, B(b)$ would neither remove nor create any dependencies on $b$, since $\forall b\, B(b)$ is valid when $b$ has only free occurrences in $B$ and $B$ is an assumption. The fact that $B$ is an assumption does matter, because dependencies are built on the variables of the conclusions from those of the assumptions. Since $B$ is an assumption and not a conclusion, there is no dependency on $b$ in $B$; this is why $L\forall$ needs no restriction.

For the case of $R\exists$: if $b$ is free in a formula of $\Gamma$, then $\exists b\, A(b)$ would neither create nor remove any dependencies on $b$ in $A$. $\exists b\, A(b)$ is valid if $b$ has only free occurrences in $A$ and $A$ is a conclusion (hence it cannot create any dependencies).

To sum up...

Quantifier $\forall$ can be seen as an "operator" on variables that removes dependencies, and quantifier $\exists$ as an "operator" that creates them. But only assumptions create dependencies, and only conclusions can remove them. So applying $\forall$ in the assumptions does no harm (since assumptions can only create dependencies, while $\forall$ can only remove them). Dually, applying $\exists$ in a conclusion does no harm.

  • For a dependency to be created, you need quantifier $\exists$ on an assumption. Therefore,
    • $L\exists$ might create dependencies.
    • $R\exists$ can't create any dependencies.
  • For a dependency to be removed, you need quantifier $\forall$ on the conclusion. Therefore,
    • $R\forall$ might remove dependencies.
    • $L\forall$ can't remove any dependencies.
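The summary above can be made mechanical. Below is a toy sketch (my own illustration; the tuple representation of formulas and the helper names are hypothetical, not standard) of the side condition a proof checker would test before allowing $R\forall$, applied to the faulty $R\forall$ step $\exists y\, Pxy \Rightarrow \forall x\, Pxy$ from the "deduction" above:

```python
# Toy eigenvariable check for R∀ (illustrative sketch, not standard code).
# A formula is a nested tuple; free_vars collects variable leaves
# not bound by 'forall'/'exists'.
def free_vars(f, bound=frozenset()):
    if isinstance(f, str):                       # a variable leaf
        return set() if f in bound else {f}
    op, *args = f
    if op in ('forall', 'exists'):               # (op, var, body)
        return free_vars(args[1], bound | {args[0]})
    return set().union(*(free_vars(a, bound) for a in args))

def r_forall_ok(gamma, delta, b):
    """R∀ is applicable only if the eigenvariable b is not free
    in the context Γ, Δ (the principal formula A(b) excluded)."""
    return all(b not in free_vars(f) for f in gamma + delta)

# Premise  ∃y Pxy ⇒ Pxy  with eigenvariable x: the check fails,
# because x is free in the context formula ∃y Pxy.
premise_left = [('exists', 'y', ('P', 'x', 'y'))]
print(r_forall_ok(premise_left, [], 'x'))  # False
```

With an empty context the same check succeeds, which is why $R\forall$ is unproblematic when the eigenvariable is fresh.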
ANSWER

I think that, in order to have a simple explanation of why you need the restriction to avoid trouble, you must think of the "context", i.e. $\Gamma$ and $\Delta$: they are sets of formulas.

Assume now that we are working in formal arithmetic, and consider a formula like $Even(x)$ [that is "formally" : $\exists k (x = 2 \times k)$, i.e. $\exists k ( x = S(S(0)) \times k)$].

Consider the case that $\Gamma$ is made of a single formula, and write the sequent $\Gamma \Rightarrow \Gamma$, i.e. :

$Even(b) \Rightarrow Even(b) \quad $ [note that $b$ is free in $\Gamma$].

Note. In some systems this is an axiom; in others not, because they restrict axioms to sequents $A \Rightarrow A$ with $A$ atomic. But there is no real difference: if you admit the more "liberal" form, with $A$ non-atomic, it is easy to prove by induction that if a sequent is provable, then it is provable with a proof in which all the initial sequents consist of atomic formulas [see Gaisi Takeuti, Proof Theory (2nd ed., 1987), page 28].

Now we apply the rule $\forall$-right to get:

$Even(b) \Rightarrow \forall x Even(x)$.

What happened? From an "innocuous" assumption we have derived that, if $b$ is even (and for sure there are even numbers), then all numbers are even ... and for sure this is not the case.

Why this fallacy? Because we have violated the restriction: $b$ must not be free in $\Gamma$, in $\Delta$, or in $\forall x\, A(x)$.

The case of $\exists$-left is the same. Having $A(y)$ to the left of $\Rightarrow$ means that it is false; concluding with $\exists x\, A(x)$ to the left of $\Rightarrow$ again means that $\exists x\, A(x)$ is false. But you can correctly conclude that there are no objects in the domain for which $A(x)$ holds only if the $y$ you used is "fresh".

If in your proof you have some other formulas that make use of it (i.e. $y$ is free in $\Gamma$ or $\Delta$), it may happen that $A$ does not hold for the $y$ satisfying those additional "conditions", while there are still "other" objects in the domain of which $A(x)$ holds.

Note. I've specified "$b$ not free" because I've treated $b$ as a free variable [see Sara Negri & Jan von Plato, Structural Proof Theory (2001), page 67]. If you are working in Gentzen's tradition, $b$ is a parameter, so it cannot be bound by quantifiers. In this case it is correct to say that $b$ does not occur in the lower sequent [see Gaisi Takeuti, Proof Theory (2nd ed., 1987), page 10].

The above intuitive considerations in support of the unavoidability of the restriction can be made more formal through semantics.

Please refer to Negri & von Plato's book [page 81]. In brief, they use a semantics based on valuations, i.e. boolean valuations for propositional logic extended to quantifiers; for simplicity, I will state only the rule for $\forall$:

a valuation is a function $v$ from formulas of predicate logic to the values $0$ and $1$, assumed to be given for all atoms:

$v(P^n(x_i, ..., x_j)) = 0$ or $v(P^n(x_i, ..., x_j)) = 1$,

and extended inductively to all formulas [in the usual way, i.e. : $v(\bot) = 0$, ..., and]

$v(\forall x\, A) = \inf_i v(A(x_i/x))$.

The fundamental definition is :

A sequent $\Gamma \Rightarrow \Delta$ is valid iff $\min_{\varphi \in \Gamma} v(\varphi) \le \max_{\psi \in \Delta} v(\psi)$

[it sounds complicated, but remember that $v(A \land B) = \min(v(A), v(B))$ and $v(A \lor B) = \max(v(A), v(B))$; so it is straightforward].

Consider again our formula $Even(b)$. Take $\mathbb N$ as domain, with a "sound" valuation such that $v(Even(2)) = 1$, $v(Even(3)) = 0$, and so on.

Now, what happens when we assign to our parameter $b$ an even number (e.g. $8$) as denotation?

The upper sequent $Even(b) \Rightarrow Even(b)$ will obviously be valid [$1 \le 1$], but for the lower sequent $Even(b) \Rightarrow \forall x\, Even(x)$ our valuation gives $v(Even(b)) = 1$ and $v(\forall x\, Even(x)) = \min_i v(Even(x_i/x)) = 0$, so the sequent is not valid [$1 \not\le 0$].

So, without the restriction, our rule is not sound.
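The valuation computation above can be replayed mechanically. Here is a sketch in Python (my own illustration; I truncate $\mathbb N$ to a finite sample, which is enough since the infimum is already $0$ there):

```python
# Sketch of the valuation argument on a finite sample of N
# (illustration; Negri & von Plato quantify over the full domain).
sample = range(10)
even = lambda x: 1 if x % 2 == 0 else 0          # v(Even(x_i))

b = 8                                             # denotation of the parameter b
upper_valid = even(b) <= even(b)                  # Even(b) => Even(b): min <= max
v_forall = min(even(x) for x in sample)           # v(∀x Even(x)) = inf_i v(Even(x_i/x))
lower_valid = even(b) <= v_forall                 # Even(b) => ∀x Even(x)

print(upper_valid, lower_valid)  # True False
```

The upper sequent passes the $\min \le \max$ test ($1 \le 1$), while the lower one fails it ($1 \not\le 0$), exactly as computed by hand above.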