Transitivity of Intuitionistic Propositional Logic

I've started working through Robert Harper's lectures on Homotopy Type Theory (here) and I have trouble understanding the solution given for one of the homework problems.

The task is to prove transitivity of IPL, i.e. that

$$\dfrac{\Gamma,\Gamma' \vDash P\,\mathsf{true} \qquad \Gamma,\,P\,\mathsf{true},\,\Gamma' \vDash Q\,\mathsf{true}}{\Gamma,\Gamma' \vDash Q\,\mathsf{true}}$$

This is Task 4 of the homework (the file includes solutions). In the example solution, a proof by cases is performed. I thought the solution was simple, and I still don't understand why the following argument using implication introduction would be wrong (sorry for the poor typesetting):

$$\dfrac{\Gamma,\Gamma' \vDash P\,\mathsf{true} \qquad \dfrac{\Gamma,\,P\,\mathsf{true},\,\Gamma' \vDash Q\,\mathsf{true}}{\Gamma,\Gamma' \vDash (P \supset Q)\,\mathsf{true}}}{\Gamma,\Gamma' \vDash Q\,\mathsf{true}}$$

This seems intuitive to me in the same sense that one can go from

$a \, \wedge \, b \, \wedge \, c \leq d$

to

$a \, \wedge \, b \leq c \supset d$

I realize that this seems too simple and it doesn't use some of the properties mentioned in the assignment, but I don't see why it's wrong.
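(For reference, the final step of the derivation above is an appeal to the usual implication elimination rule, i.e. modus ponens:

$$\dfrac{\Gamma,\Gamma' \vDash (P \supset Q)\,\mathsf{true} \qquad \Gamma,\Gamma' \vDash P\,\mathsf{true}}{\Gamma,\Gamma' \vDash Q\,\mathsf{true}}$$

so the proposed proof is an introduction immediately followed by an elimination.)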

Another problem I have is that the provided solution makes heavy use of inductive hypotheses. I've done proofs by induction before (e.g. over the natural numbers), but I don't understand how to transfer those mechanisms to this context; I get the impression that Robert Harper means something different by "induction" than I'm used to. Is there a recursive kind of induction going on here, or is this really just a proof by case enumeration? And what is the role of the inductive hypothesis, and how is it determined and used?

If someone could help me understand that would be greatly appreciated.

Best answer:

I understand why you're confused! Perhaps Bob explains this in the lectures (I haven't watched the videos), but from the PDF notes and homework alone I don't see how you could guess what's going on here; and it's actually a really subtle thing that it took me a long time to even vaguely understand.

There are a lot of different ways of presenting logic as a formal system. Three of the most common are (1) Hilbert systems, (2) natural deduction, and (3) sequent calculus. Bob disparages Hilbert systems a bit on p9 of the week 1 notes; they do have a conceptual explanation in terms of combinatory logic and closed categories, but they're not relevant to the point here. Natural deduction and sequent calculus both work with "entailments" $\Gamma\vdash A$ separate from implication, making explicit a context of "hypotheses", and both divide the logical rules into ways to "use" things and ways to "prove" things; the differences are in the details.

The "prove" rules are called "introduction" rules in natural deduction and "right" rules in sequent calculus, but in both cases they look basically the same: they introduce a new connective in the consequent of the conclusion. (In $\Gamma\vdash A$ the $\Gamma$ is the antecedents or context or hypotheses while $A$ is the consequent, whereas in a rule $\frac{J_1 \quad J_2}{J_3}$ the judgments $J_1,J_2$ above the line are premises and the $J_3$ below the line is the conclusion.) By contrast, the "use" rules of natural deduction are called "elimination" rules and "get rid of" a connective in the consequent of a premise, whereas the "use" rules of sequent calculus are called "left" rules and introduce a new connective as an antecedent of the conclusion. Thus for instance the elimination rule for $\vee$ in natural deduction is

$$\frac{ \Gamma,A\vdash C \qquad \Gamma,B\vdash C \qquad \Gamma\vdash A\vee B }{\Gamma\vdash C} $$

whereas the left rule for $\vee$ in sequent calculus is

$$\frac{\Gamma,A\vdash C \qquad \Gamma,B\vdash C}{\Gamma,A\vee B \vdash C}$$

Thus, in natural deduction, to prove $C$ using a "case split" on $A\vee B$, you do three things: (1) prove $A\vee B$, (2) prove $C$ assuming $A$, and (3) prove $C$ assuming $B$. Whereas in sequent calculus, to prove $C$ using a case split on $A\vee B$, you have to already have $A\vee B$ as a hypothesis, and then you do only (2) and (3).
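Under Curry-Howard, the natural-deduction $\vee$-elimination rule is exactly case analysis on a sum type. Here is a small Python sketch of that reading; the names `Inl`, `Inr`, and `or_elim` are illustrative, not from any particular library:

```python
from dataclasses import dataclass

@dataclass
class Inl:
    """Left injection: a proof of A ∨ B obtained from a proof of A."""
    value: object

@dataclass
class Inr:
    """Right injection: a proof of A ∨ B obtained from a proof of B."""
    value: object

def or_elim(disj, on_left, on_right):
    """∨-elimination: from a proof of A ∨ B, a way to get C from A,
    and a way to get C from B, produce a proof of C."""
    if isinstance(disj, Inl):
        return on_left(disj.value)
    return on_right(disj.value)

# Case-split on a value of int ∨ str:
print(or_elim(Inl(3), lambda n: f"number {n}", lambda s: f"string {s}"))  # number 3
```

The three arguments of `or_elim` correspond exactly to the three premises of the natural-deduction rule: the proof of $A\vee B$ is an ordinary input, so any lemma producing it can be slotted in directly.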

It follows that the "transitivity" or "using lemmas" rule (which is also called "composition", "substitution", and "cut") has a different status in the two cases. In natural deduction, transitivity is built into most of the rules: e.g. if you have a lemma that proves $A\vee B$, there's hardly anything to prove when claiming that such a lemma can be used in the middle of another proof to justify a case split, since the primitive rule for "using a $\vee$" says that you get to give a proof of $A\vee B$, so you can just slot in the proof of the lemma. In sequent calculus this is not the case: to justify using a lemma for $A\vee B$, you have to inspect the proofs of the lemma and of the theorem that wants to use the lemma and match up the corresponding parts inductively; this is called cut elimination or cut admissibility.
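For reference, the cut rule itself, in sequent-calculus notation, is

$$\frac{\Gamma \vdash A \qquad \Gamma, A \vdash C}{\Gamma \vdash C}\;(\mathrm{cut})$$

and cut elimination shows that this rule is admissible, i.e. anything provable with it is already provable without it.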

Overall, this difference means that sequent calculus is more "spartan": there are fewer proofs and less redundancy. In natural deduction, you can do all sorts of detours in a proof, introducing a new statement and then immediately eliminating it. Sequent calculus forbids that, mandating that new connectives be introduced (never eliminated) as you pass downwards through the proof; this has many nice metatheoretic consequences like the "subformula property". There is a formal way of "eliminating the redundancy" in natural deduction, called normalization: basically whenever you eliminate something that was introduced, you can excise that part of the proof. Such normalization corresponds (roughly) to the more involved process of cut-elimination in sequent calculus, and the "normal" natural deduction proofs correspond (again, roughly) to the proofs of sequent calculus. Under the Curry-Howard correspondence to $\lambda$-calculus, a natural deduction proof can contain "redexes" such as $(\lambda x.M)(N)$, which can be $\beta$-reduced to $M[N/x]$ (i.e. $M$ with $N$ substituted for $x$), whereas a sequent calculus produces the normal forms directly without allowing any redexes.
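The reduction $(\lambda x.M)(N) \leadsto M[N/x]$ can be made concrete. Here is a minimal, illustrative Python model of λ-terms with a one-step β-contraction; to keep the sketch short it assumes all bound variable names are distinct, sidestepping capture:

```python
from dataclasses import dataclass

# A minimal representation of λ-terms.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def subst(term, name, repl):
    """Substitution term[repl/name] (assumes bound names are distinct,
    so no variable capture can occur)."""
    if isinstance(term, Var):
        return repl if term.name == name else term
    if isinstance(term, Lam):
        if term.param == name:          # name is shadowed; stop here
            return term
        return Lam(term.param, subst(term.body, name, repl))
    return App(subst(term.fn, name, repl), subst(term.arg, name, repl))

def beta_step(term):
    """Contract an outermost redex (λx.M) N to M[N/x], if present."""
    if isinstance(term, App) and isinstance(term.fn, Lam):
        return subst(term.fn.body, term.fn.param, term.arg)
    return term

# (λx. x) y  ⇝  y
print(beta_step(App(Lam("x", Var("x")), Var("y"))))  # Var(name='y')
```

The point of the analogy: a sequent-calculus proof never contains an `App` whose function part is a `Lam`, so `beta_step` would have nothing to do.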

The solution given by Bob to the exercise you're wondering about, then, is the kind of solution that you would have to give in a sequent calculus, whereas the simple argument you suggest works only in a natural deduction, because it involves an elimination rule acting on a connective in a premise. In a sequent calculus, instead of the elimination rule for implication that you used, one has instead the left rule

$$ \frac{\Gamma\vdash A \qquad \Gamma,B\vdash C}{\Gamma,A\supset B \vdash C}. $$

The sort of construction you used of an introduction rule followed by an elimination is exactly the sort of "redex" $(\lambda x.M)(N)$ that doesn't exist in a sequent calculus.

To be clear, I don't mean that the solution given is in a sequent calculus; it uses the natural deduction rules rather than the sequent calculus ones. But it proves more than the bare fact of transitivity: instead of forming $(\lambda x.M)(N)$, it directly constructs an analogue of the substitution $M[N/x]$, thus incorporating some amount of "normalization" already. So it's no wonder you were confused: the notes don't make this distinction, and the exercise doesn't specify what is intended. But perhaps it was made clearer in the lectures or by personal communication to the students.

Finally, the kind of induction Bob is doing that you may not be used to is called structural induction. Induction on natural numbers is just one kind of induction; you can do induction on any kind of "inductively generated" structure; here he is inducting on proofs, which are generated inductively from inference rules.
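Computationally, structural induction is just structural recursion over an inductively generated datatype. As an illustrative sketch (the datatype and function names are made up for this example), here is a recursion over IPL formulas with one base case for atoms and one inductive case per connective; the recursive calls play exactly the role of the inductive hypotheses:

```python
from dataclasses import dataclass

# Formulas of IPL, inductively generated (only two connectives shown).
@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Imp:          # P ⊃ Q
    left: object
    right: object

@dataclass(frozen=True)
class Conj:         # P ∧ Q
    left: object
    right: object

def depth(p):
    """Structural recursion: a base case for atoms and one inductive
    case per connective; each recursive call is an inductive hypothesis."""
    if isinstance(p, Atom):
        return 0
    return 1 + max(depth(p.left), depth(p.right))

print(depth(Imp(Atom("P"), Conj(Atom("Q"), Atom("R")))))  # 2
```

Inducting on proofs works the same way, except the "constructors" are the inference rules: there is one case per rule, and the inductive hypotheses are the claims already established for the subderivations above the line.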