I've recently been looking into linear logic, and it seems every source I can find on it uses the sequent calculus proof system. However, I personally find the sequent calculus to have numerous downsides:
- All the branching means it takes up far more horizontal space than the mostly vertical Fitch proofs, which I find annoying both on paper and on a computer.
- Stylistically, I much prefer working from the premises to the conclusion in the object language rather than reducing a metalanguage representation of the proof.
- It's a lot harder to wrap my head around all the new operators, especially since many linear logic resources also use duality to move every formula to the right-hand side of the sequent, making it even less clear what I'm actually trying to do. (Even in the sequent calculus for classical logic, I already struggle with multiple formulas on the right of the turnstile, let alone with putting everything there!)
For these reasons and more, I was wondering if it would be possible to work with linear logic using a Fitch-style natural deduction system instead of the sequent calculus. I understand that the notion of resource expenditure might make it more complex, but surely that could be remedied by crossing off statements once they have been used in an inference?
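The "crossing off" idea can be sketched as simple bookkeeping: treat every line of a proof as a resource that may be cited exactly once. The following is a toy model under assumptions of my own (the class and method names are illustrative); it enforces only linearity of citations, not whether the inference rules themselves are applied correctly.

```python
# Toy model of "cross a line off once it has been used":
# every line of a proof is a resource that may be cited exactly once.
# This tracks linearity only; it does NOT validate the logical rules.

class LinearProof:
    def __init__(self):
        self.lines = []   # formulas, as plain strings
        self.used = []    # parallel flags: has line i+1 been consumed?

    def _consume(self, n):
        if self.used[n - 1]:
            raise ValueError(f"line {n} ({self.lines[n - 1]}) already used")
        self.used[n - 1] = True

    def step(self, formula, *cited):
        """Record `formula`, crossing off each cited line so it can
        never be cited again. With no citations, it's an assumption.
        Returns the new 1-indexed line number."""
        for n in cited:
            self._consume(n)
        self.lines.append(formula)
        self.used.append(False)
        return len(self.lines)


pf = LinearProof()
a = pf.step("p")                   # premise
b = pf.step("p")                   # second premise
c = pf.step("(p ⊗ p)", a, b)       # ⊗ Intro: consumes lines 1 and 2
# pf.step("((p ⊗ p) ⊗ p)", c, a)   # would raise: line 1 already used
```

Reusing a crossed-off line raises an error, which is exactly the discipline I'd hope blocks the invalid inferences while leaving legitimate single-use reasoning alone.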
I’ll demonstrate a problem one would run into when developing a Fitch-style natural deduction system for linear logic. Observe the following argument:
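(Here I transplant the classical Fitch rules naively: $⊗$ Intro cites two lines, and $⊸$ Intro discharges a subproof.)

```
1 │ p                Assumption
2 │ │ p              Assumption
3 │ │ p ⊗ p          ⊗ Intro, 1, 2
4 │ p ⊸ (p ⊗ p)      ⊸ Intro, 2–3
```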
It looks like we should be able to use Modus Ponens on lines 1 and 4, and then use $⊸$ Intro to discharge line 1 and prove $p⊸(p⊗p)$ outright, but $\vdash p⊸(p⊗p)$ is clearly invalid in linear logic. So it seems that, since the instance of $p$ assumed on line 1 was already used to get $p⊗p$ on line 3, we can’t use it again for Modus Ponens. However,

$p \vdash p⊸(p⊗p)$ and $\vdash p⊸(p⊸(p⊗p))$

are both valid, so whatever restriction we impose must still leave these derivable.
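For instance, the first of these sequents is still derivable under the crossing-off restriction, since no line is ever cited twice:

```
1 │ p                Premise
2 │ │ p              Assumption
3 │ │ p ⊗ p          ⊗ Intro, 1, 2
4 │ p ⊸ (p ⊗ p)      ⊸ Intro, 2–3
```

Each of lines 1 and 2 is consumed exactly once; the invalid argument above only goes wrong when it cites line 1 a second time at the Modus Ponens step.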
There may be other necessary restrictions, but this is the one that is most obvious to me.