Intuition behind structural weakening


According to the Fitch deductive system I'm using in my logic class, the following is a valid inference:

Premise: Q

Step 1: Assume P

Step 2: Q by reiteration

Step 3: P → Q by conditional intro.
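The derivation can also be checked in a proof assistant. The following is a minimal sketch in Lean 4 syntax (my own rendering, not part of the original question): the `fun _hp => hq` term is exactly the Fitch proof — assume `P`, reiterate the premise `hq : Q`, discharge the assumption.

```lean
-- The Fitch derivation above as a Lean term:
-- assume P (the unused argument _hp), then reiterate hq : Q.
example (P Q : Prop) (hq : Q) : P → Q :=
  fun _hp => hq
```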

Basically it seems that, given the truth of any arbitrary atomic sentence, you can deduce that any other arbitrary atomic sentence, whether true or not, implies the first sentence. And it is this result that I'm having trouble wrapping my head around. Couldn't it be the case that two sentences can be true without any relation of implication holding between them? This certainly seems to be the case in informal reasoning and with facts about the world, so what's the rationale behind this being a valid step in propositional logic?

Edit: the initial title ("motivation for entailment in propositional logic") was misleading - this question is more about structural weakening than it is about the material conditional or entailment per se. I made this mistake because I myself wasn't clear on weakening until the question was answered.


There are 3 best solutions below


I don't know the Fitch-style proof system all that well, but in propositional logic it is easy to check that any P implies Q -> P: write out the right truth table and it becomes clear. Even if this isn't intuitively appealing, it follows from how the material conditional is defined: P -> (Q -> P) can never be false. If it were, P would have to be true and the consequent Q -> P false. But since P is true, Q -> P can't be false. Therefore P -> (Q -> P) must be true.
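The truth table mentioned above can be enumerated mechanically. A short Python sketch (my own encoding of the material conditional as `(not p) or q`):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when the antecedent is true
    # and the consequent is false
    return (not p) or q

# P -> (Q -> P) is a tautology: it comes out true in every row
# of its truth table.
rows = [(p, q, implies(p, implies(q, p)))
        for p, q in product([False, True], repeat=2)]
for p, q, value in rows:
    print(p, q, value)
assert all(value for _, _, value in rows)
```

Every row evaluates to true, which is what makes the formula a tautology rather than a contingent claim about P and Q.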


The key point is this:

Couldn't it be the case that two sentences can be true without any relation of implication holding between them?

That's quite true - and weakening does not assert a relationship, at least not a meaningful one, between $P$ and $Q$!

Think of classical logic as describing the world as it happens to be. Classical logic doesn't "see" anything more complicated than what happens to be true and what happens to be false; things like causal relationships between facts are simply not treated by it. In particular, "$P\rightarrow Q$" just means "If it happens to be the case that $P$ is true then it happens to be the case that $Q$ is true." If we already know that $Q$ happens to be true, then of course there's no information gained by asserting $P\rightarrow Q$. But that doesn't mean that it would be improper for us to assert $P\rightarrow Q$, it only means that it would be a bit pointless to do so.

(On that note, the deduction rules of classical logic are created purely to characterize what operations on (sets of) sentences preserve "happens-to-be-tru"th. This is why we admit "From $P$ and $\neg P$, deduce $Q$" as a rule despite the absurdity of its premise: it is in fact exactly the absurdity of its premise that motivates its acceptance, since the fact that its premise can't occur ensures that that rule can never be used to accidentally conclude a falsehood!)


Now an editorial:

To my mind this is actually a strength of classical logic: working with it reveals the subtleties of notions like cause and effect and forces us, if we want to treat them rigorously, to really make clear what we mean by them. That is, its "starkness" enables us to use it as a jumping-off point for treating more nuanced notions without smuggling in preconceptions about them accidentally.

To see what I mean, consider modal logic (using the Kripke frame semantics for simplicity). Roughly speaking, in modal logic we consider not just a single "actual world" but a whole collection of "possible worlds." We can then (try to) frame causal relationships as facts across worlds: "$A$ causes $B$" means "In all possible worlds we have $A\rightarrow B$." This is quite different from the proposition "$A\rightarrow B$" holding at the actual world we're in, but perhaps failing at other "nearby" worlds.

But while modal logic is at first a step away from classical logic, it can in fact be folded back into classical logic by broadening the notion of "world." Kripke frames are themselves classical mathematical objects, and so we can put the whole perspective above inside an overarching classical framework. Classical logic, that is, can serve as a context for studying non-classical logics!

Of course it's not unique in this respect - any logic worth its salt can do the same - but the starkness of classical logic, in my opinion, keeps it "maximally neutral" in this role. I've said a bit more about this here.
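The cross-world reading of "$A$ causes $B$" can be made concrete with a toy Kripke model. The worlds, valuation, and helper names below are illustrative choices of mine, not anything from the answer; the point is just the gap between $A\rightarrow B$ holding at the actual world and $A\rightarrow B$ holding at every world.

```python
# A toy Kripke model: worlds are names, and the valuation maps each
# world to the set of atoms true there. (Worlds and atoms here are
# purely illustrative.)
worlds = ["actual", "nearby"]
valuation = {
    "actual": {"A", "B"},   # A and B both happen to hold here
    "nearby": {"A"},        # A holds but B fails here
}

def holds_implication(world, a, b):
    # Material conditional a -> b, evaluated at a single world
    return (a not in valuation[world]) or (b in valuation[world])

def box_implication(a, b):
    # Box(a -> b): a -> b holds at *every* world
    # (total accessibility, for simplicity)
    return all(holds_implication(w, a, b) for w in worlds)

print(holds_implication("actual", "A", "B"))  # True: A -> B at the actual world
print(box_implication("A", "B"))              # False: it fails at "nearby"
```

The implication is true where we happen to stand, yet fails as a law across worlds: exactly the distinction the modal reading of causation is after.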

I'm actually ignoring a huge issue here, namely the leap from propositional to predicate logics, but that's a whole separate thing which ultimately doesn't affect the general picture I'm trying to paint, so for now let's ignore it.


OK, since you figured out yourself that it is structural weakening that allows reiteration (in a Fitch system), your question becomes: what is the motivation for allowing weakening (in classical logic)?

The motivation is simply that disallowing weakening means no longer treating the premises in a sequent as a set! There are two (somewhat equivalent) ways of doing this: explicitly introduce another (sequent-level) conjunction, different from the "normal" one; this second conjunction is sometimes called fusion (it's used in relevance logics). Alternatively, you can write sequent premises the usual way (with commas), but interpret them as multisets, so that $A, A$ is not the same as $A$.
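The set-versus-multiset distinction can be seen directly in code. A small illustration of mine (not from the answer) using Python's built-in `set` and `collections.Counter`:

```python
from collections import Counter

# Premises as a set: A, A collapses to A, so duplicating or adding
# premises is invisible at the level of the data structure --
# this is the world where weakening and contraction are free.
as_set = {"A"} | {"A"}
print(as_set == {"A"})  # True: the duplicate vanished

# Premises as a multiset: A, A is genuinely different from A,
# which is what a substructural logic needs in order to track
# how many times each premise is actually used.
as_multiset = Counter(["A", "A"])
print(as_multiset == Counter(["A"]))  # False: multiplicity matters
```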

The "intuitive" counter-argument to either of these approaches is that it surely makes your logic harder to work with. But of course if relevance is your goal, that was considered a price worth paying in the school of Anderson, Belnap, etc.

There are some deeper results that argue against this kind of logic being foundational in mathematics: e.g., "relevant [Peano] arithmetic" can't prove some quadratic residue formulas stemming from the Chinese remainder theorem. This follows from the fact that the ring of complex numbers is a valid model for "relevant [Peano] arithmetic".