I was wondering about this form of logical proof in the book: Programming in the 1990s, by Edward Cohen.
To prove $P \implies Q$, the author uses this sequence:
\begin{align*} & Q \\ \equiv\ & \langle\text{Assuming } P\rangle \\ & \text{true} \end{align*}
Could someone explain the thinking behind it?
In an "equational-style" proof, proving a formula $A$ amounts to proving $A \equiv \text{true}$.
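As a small illustration (assuming the Excluded Middle axiom of the calculus), the tautology $p \lor \neg p$ is established equationally by reducing it to $\text{true}$:

```latex
\begin{align*}
        & p \lor \neg p \\
\equiv\ & \langle\text{Excluded Middle}\rangle \\
        & \text{true}
\end{align*}
```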
See David Gries & Fred Schneider, A Logical Approach to Discrete Math (Springer, 1993) for a clearer explanation.
See in particular page 72.
How does this apply to your proof?
We have an assumption $P$, and under that assumption we prove that $Q \equiv \top$ (writing $\top$ for $\text{true}$).
Using the Deduction Theorem, we conclude $P \Rightarrow (Q \equiv \top)$.
Then we need the distributivity law of $\Rightarrow$ over $\equiv$: $$P \Rightarrow (Q \equiv R) \;\equiv\; \big((P \Rightarrow Q) \equiv (P \Rightarrow R)\big).$$
With $\top$ in place of $R$, our formula above becomes: $(P \Rightarrow Q) \equiv (P \Rightarrow \top)$.
But $P \Rightarrow \top$ is $\top$; thus, we have arrived at $$(P \Rightarrow Q) \equiv \top,$$ and this means that $P \Rightarrow Q$ is a theorem.
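The argument can also be checked mechanically. Here is a minimal sketch in Lean 4 (the encoding is my own choice: the hypothesis `h : P → (Q ↔ True)` plays the role of "assuming $P$, we have $Q \equiv \top$"):

```lean
-- h encodes: "under the assumption P, Q is equivalent to true".
-- From it we recover P → Q, mirroring the calculational argument above.
example (P Q : Prop) (h : P → (Q ↔ True)) : P → Q :=
  fun hp => (h hp).mpr trivial
```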