Associativity of $\iff$


In this answer, user18921 wrote that the $\iff$ operation is associative, in the sense that

  • $(A\iff B)\iff C$
  • $A\iff (B\iff C)$

are equivalent statements.

One can brute-force a proof fairly easily (just write down the truth table). But is there a more intuitive reason why this associativity must hold?
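For concreteness, the brute-force check alluded to here takes only a few lines (a quick sketch in Python):

```python
from itertools import product

# Both parenthesizations of <-> agree on all 8 rows of the truth table.
def iff(x, y):
    return x == y

for a, b, c in product([True, False], repeat=3):
    assert iff(iff(a, b), c) == iff(a, iff(b, c))
```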

There are 4 answers below.

Accepted answer:

I would say that I personally don't feel that this equivalence is that intuitive. (Maybe it is because I have been thinking too constructively these days.) In classical logic, we have the isomorphism with the Boolean algebra, and that makes it possible to think of $A \leftrightarrow B$ as $A \oplus B \oplus 1$ where $\oplus$ is the exclusive OR. This makes quite a short proof of the associativity of the biconditional. To me, this is the most intuitive way I can think of to justify the associativity.
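This reduction is easy to check mechanically. A quick sketch in Python (writing 0/1 for false/true, with `^` as exclusive OR): once $A \leftrightarrow B$ is rewritten as $A \oplus B \oplus 1$, associativity of the biconditional falls out of associativity of $\oplus$, since the extra 1's cancel.

```python
from itertools import product

# In classical two-valued logic, A <-> B has the same table as A xor B xor 1,
# so associativity of <-> is inherited from associativity of xor.
def iff(a, b):          # a, b in {0, 1}
    return a ^ b ^ 1

for a, b, c in product([0, 1], repeat=3):
    # both sides reduce to a ^ b ^ c, because the 1's cancel
    assert iff(iff(a, b), c) == iff(a, iff(b, c))
```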

But as J Marcos mentioned, the equivalence no longer holds in intuitionistic logic, which may explain why it is not supposed to feel intuitive.

(It is straightforward to find a counterexample using the topological model. I will just work out the tedious details for you. Take $\mathbb R$ with the usual topology as the space of interest, recall that $[\![P \rightarrow Q]\!] = \text{int}([\![P]\!]^c \cup [\![Q]\!])$, where int is the interior and $^c$ is the complement, and define the valuation $[\![\cdot]\!]$ by $[\![A]\!] = (-1, 0) \cup (0, 1)$ and $[\![B]\!] = [\![C]\!] = \varnothing$. It follows that \begin{align*} [\![A \leftrightarrow B]\!] & = [\![A \rightarrow B]\!] \cap [\![B \rightarrow A]\!] = \text{int}([\![A]\!]^c \cup \varnothing) \cap \text{int}(\varnothing^c \cup [\![A]\!]) \\ & = \text{int}([\![A]\!]^c) \cap \mathbb R = (-\infty, -1) \cup (1, \infty) \\ [\![(A \leftrightarrow B) \leftrightarrow C]\!] & = \text{int}([\![A \leftrightarrow B]\!]^c \cup \varnothing) \cap \text{int}(\varnothing^c \cup [\![A \leftrightarrow B]\!]) \\ & = \text{int}([-1, 1]) \cap \mathbb R = (-1, 1) \\ [\![B \leftrightarrow C]\!] & = \text{int}(\varnothing^c \cup \varnothing) \cap \text{int}(\varnothing^c \cup \varnothing) = \mathbb R \\ [\![A \leftrightarrow (B \leftrightarrow C)]\!] & = \text{int}(\mathbb R^c \cup [\![A]\!]) \cap \text{int}([\![A]\!]^c \cup \mathbb R) \\ & = [\![A]\!] = (-1, 0) \cup (0, 1) \ne (-1, 1) \end{align*} so the two parenthesizations receive different values.)
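If you prefer a finite check, the same failure already shows up in a smaller model. Here is a sketch in Python (my own example, not the one above, using the Heyting algebra of open sets of the Sierpinski space): implication is computed as the largest open set whose intersection with the antecedent lies in the consequent, and a brute-force search finds triples where the two parenthesizations differ.

```python
# Heyting algebra of open sets of the Sierpinski space X = {0, 1},
# whose open sets are {}, {0} and X.
X = frozenset({0, 1})
opens = [frozenset(), frozenset({0}), X]

def implies(a, b):
    # Heyting implication: the union of all opens u with u & a <= b
    # (a union of opens is open, so this is the largest such open set).
    return frozenset().union(*(u for u in opens if (u & a) <= b))

def iff(a, b):
    return implies(a, b) & implies(b, a)

bad = [(a, b, c) for a in opens for b in opens for c in opens
       if iff(iff(a, b), c) != iff(a, iff(b, c))]
print(bad[0])  # at least one triple witnesses the failure of associativity
```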

Another answer:

Do you find it intuitive that addition modulo 2 qualifies as associative? Do you find it intuitive that two-valued negation N qualifies as an isomorphism between the concrete algebras ({0, 1}, X) and ({0, 1}, Y), where X and Y indicate binary operations? If you answer yes to both questions, then, letting E-2 indicate logical equivalence, the negation isomorphism transfers the associativity of addition modulo 2, so E-2 associates also.
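The isomorphism argument can be spelled out concretely; a quick sketch in Python: N(x) = 1 − x maps ({0, 1}, ⊕) onto ({0, 1}, E-2), i.e. N(x ⊕ y) = E(N(x), N(y)), and a bijective homomorphism carries associativity across.

```python
from itertools import product

def N(x):               # two-valued negation
    return 1 - x

def E(x, y):            # logical equivalence E-2
    return 1 if x == y else 0

# N is an isomorphism from ({0,1}, xor) to ({0,1}, E) ...
for x, y in product([0, 1], repeat=2):
    assert N(x ^ y) == E(N(x), N(y))

# ... so E inherits associativity from addition modulo 2.
for x, y, z in product([0, 1], repeat=3):
    assert E(E(x, y), z) == E(x, E(y, z))
```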

In a paper called The Equivalential Calculus (you can read the paper in either of the volumes Polish Logic 1920-1939 or Jan Lukasiewicz: Selected Works) J. Lukasiewicz writes "Now, I long ago noted that equivalence is also associative, and accordingly I established the following thesis in the symbolism of Principia p≡.q≡r:≡:p≡q.≡r This thesis, which in my symbolism can be expressed by EEpEqrEEpqr, is cited by Tarski in his doctoral thesis of 1923...

A2. p≡.q≡r:≡:p≡q.≡r

The second axiom [A2] is the law of associativity for equivalence, discovered by myself."

Unfortunately, though, Lukasiewicz gives no reference to any paper where he first made this discovery, so I have no idea how he figured it out.

From a natural deduction framework, I do not find the associativity of logical equivalence intuitive at all, even if you know one particular natural deduction system fairly well. I once tried to prove it without any derived rules of inference, and then found a shorter proof; the shorter proof still took up 141 lines. Someone else tried proving it with derived rules of inference, and proved one of the required conditionals in 47 lines.

In what gets called Lukasiewicz 3-valued logic, logical equivalence takes on the value of truth along the diagonal, the value of falsity at the corners where one argument is true and the other false, and the third truth value elsewhere. In other words, where 0 indicates falsity, .5 the third truth value, and 1 truth, we have the following matrix for logical equivalence E:

  E   0  .5  1
  0   1  .5  0
  .5  .5  1  .5 
  1   0  .5  1

In reverse Polish notation the associativity law EEpEqrEEpqr goes pqrEEpqErEE. So, let p=.5, q=.5, and r=0. Then we obtain:

.5 .5 0 E E .5 .5 E 0 E E = 1 .5 .5 E 0 E E = 1 1 0 E E = 1 0 E = 0.

So, in Lukasiewicz 3-valued logic, the associativity of logical equivalence can take on the falsity value "0" and thus doesn't even qualify as a quasi-tautology (a quasi-tautology never takes on the value of falsity, the law of Clavius CCNppp comes as a quasi-tautology in Lukasiewicz 3-valued logic).
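The computation above is easy to replicate; a sketch in Python, using the standard closed form E(a, b) = 1 − |a − b|, which reproduces the matrix given earlier:

```python
# Lukasiewicz equivalence on {0, .5, 1}: E(a, b) = 1 - |a - b|.
def E(a, b):
    return 1 - abs(a - b)

p, q, r = .5, .5, 0
left = E(E(p, q), r)    # (p E q) E r
right = E(p, E(q, r))   # p E (q E r)
print(E(left, right))   # the associativity law evaluates to 0 here
```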

Another answer:

This might be one of the situations where a more general (and therefore stronger) result is more intuitive (depending on how your intuition works). The result I have in mind is that, if you combine any list $\mathcal L$ of formulas (not just three like the $A,B,C$ in the question) by $\iff$, then no matter how you put the parentheses to make it unambiguous, the resulting formula is true if and only if an even number of the formulas in the list $\mathcal L$ are false. In other words, iterated $\iff$ amounts to a parity connective, regardless of the placement of parentheses.
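This parity claim can be spot-checked by brute force (a sketch in Python): enumerate every parenthesization of a list of truth values via a recursive split, and confirm that all of them evaluate to "an even number of the values are false".

```python
from itertools import product

def evaluations(vals):
    # Set of values obtainable by fully parenthesizing vals, combined by <->.
    if len(vals) == 1:
        return {vals[0]}
    results = set()
    for i in range(1, len(vals)):           # split point of the outermost <->
        for left in evaluations(vals[:i]):
            for right in evaluations(vals[i:]):
                results.add(left == right)  # <-> on booleans is ==
    return results

for n in range(1, 6):
    for vals in product([True, False], repeat=n):
        parity = vals.count(False) % 2 == 0
        # every parenthesization gives the same value: the parity of falsehoods
        assert evaluations(vals) == {parity}
```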

Another answer:

The reason it's hard to intuit this is that we're used to thinking of "$(A\Leftrightarrow B)\Leftrightarrow C$" as stating something about the sentences "$A\Leftrightarrow B$" and "$C$", but nothing about "$A$" (or "$B$") directly: converting it to a sentence, "$A\Leftrightarrow(B\Leftrightarrow C)$", that speaks about "$A$" directly, is counterintuitive.

So to understand the equivalence between "$(A\Leftrightarrow B)\Leftrightarrow C$" and "$A\Leftrightarrow(B\Leftrightarrow C)$", we need to think of them not as sentences about "$A\Leftrightarrow B$" and "$C$", respectively "$A$" and "$B\Leftrightarrow C$", but as sentences about "$A$", "$B$", and "$C$" directly.

What does "$(A\Leftrightarrow B)\Leftrightarrow C$" say about "$A$", "$B$", and "$C$"?

  1. If "$C$" and "$A$" hold, so does "$B$".
  2. If "$B$" and "$C$" hold, so does "$A$".
  3. If "$A$" and "$B$" hold, so does "$C$".
  4. If "$A$" and "$B$" both are false, "$C$" holds.

So the crucial point in understanding the equivalence between "$(A\Leftrightarrow B)\Leftrightarrow C$" and "$A\Leftrightarrow(B\Leftrightarrow C)$" is understanding the equivalence between

4. If "$A$" and "$B$" both are false, "$C$" holds.

and

4$'$. If "$B$" and "$C$" both are false, "$A$" holds.

(which is clear).

The trick here was looking only at the direct statements (e.g. if "$C$" and "$A$" hold, so does "$B$"): their contrapositives (e.g. if "$C$" holds but "$B$" does not, then "$A$" is false) are less intuitive.
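One can verify mechanically that conditions 1–4 together are exactly "$(A\Leftrightarrow B)\Leftrightarrow C$" (a quick Python sketch over the eight truth assignments):

```python
from itertools import product

def iff(x, y):
    return x == y

for A, B, C in product([True, False], repeat=3):
    formula = iff(iff(A, B), C)
    conditions = (
        (not (C and A) or B) and   # 1. if C and A hold, so does B
        (not (B and C) or A) and   # 2. if B and C hold, so does A
        (not (A and B) or C) and   # 3. if A and B hold, so does C
        (A or B or C)              # 4. if A and B are both false, C holds
    )
    assert formula == conditions
```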