As I was working my way through Steven Roman's *Advanced Linear Algebra*, I came across this in Chapter 11, exercise 1.
Let $U,W$ be subspaces of a metric vector space $V$. Show that
(a) $U\subseteq W \implies W^\bot\subseteq U^\bot$
(b) $U\subseteq U^{\bot\bot}$
(c) $U^\bot = U^{\bot\bot\bot}$
Additionally, $(U+W)^\bot = U^\bot \cap W^\bot$ and $U^\bot + W^\bot \subseteq (U\cap W)^\bot$. I'm having some trouble with the reverse inclusion $(U\cap W)^\bot \subseteq U^\bot + W^\bot$; at least it isn't as obvious as the other properties were.
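(For context: (c) already follows purely formally from (a) and (b), without touching the bilinear form, which is part of what makes the analogy below suggestive. Applying (b) to $U^\bot$ gives one inclusion, and applying (a) to (b) gives the other:)

$$
U^\bot \subseteq (U^\bot)^{\bot\bot} = U^{\bot\bot\bot},
\qquad
U \subseteq U^{\bot\bot} \;\stackrel{\text{(a)}}{\Longrightarrow}\; U^{\bot\bot\bot} = (U^{\bot\bot})^\bot \subseteq U^\bot .
$$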
All of these properties look strikingly similar to the behavior of connectives in intuitionistic logic, e.g. $p\rightarrow q \vdash \neg q\rightarrow \neg p$, $\vdash p \rightarrow \neg\neg p$, $\vdash \neg p \leftrightarrow \neg\neg\neg p$, $\vdash \neg(p \lor q) \leftrightarrow \neg p \land \neg q$, $\vdash \neg p \lor \neg q \rightarrow \neg(p\land q)$.
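(Each of these tautologies really is intuitionistically valid; here is a sketch checking them constructively in Lean 4, using no classical axioms, where $\neg p$ unfolds to $p \to \mathsf{False}$:)

```lean
-- Constructive proofs of the listed tautologies; no classical axioms used.
variable (p q : Prop)

-- p → q ⊢ ¬q → ¬p
example : (p → q) → (¬q → ¬p) :=
  fun hpq hnq hp => hnq (hpq hp)

-- ⊢ p → ¬¬p
example : p → ¬¬p :=
  fun hp hnp => hnp hp

-- ⊢ ¬p ↔ ¬¬¬p
example : ¬p ↔ ¬¬¬p :=
  ⟨fun hnp h => h hnp, fun h hp => h (fun hnp => hnp hp)⟩

-- ⊢ ¬(p ∨ q) ↔ ¬p ∧ ¬q
example : ¬(p ∨ q) ↔ ¬p ∧ ¬q :=
  ⟨fun h => ⟨fun hp => h (Or.inl hp), fun hq => h (Or.inr hq)⟩,
   fun ⟨hnp, hnq⟩ h => h.elim hnp hnq⟩

-- ⊢ ¬p ∨ ¬q → ¬(p ∧ q)
example : (¬p ∨ ¬q) → ¬(p ∧ q) :=
  fun h ⟨hp, hq⟩ => h.elim (fun hnp => hnp hp) (fun hnq => hnq hq)
```

Note the one-directional ones ($p \to \neg\neg p$ and the second De Morgan law) are exactly the ones whose converses need classical logic, matching the one-directional inclusions above.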
Is there a connection here? If so, what is it?
I don't know the details, but I've always thought something like this was part of the motivation for linear logic -- where the negation is even notated $p^\bot$ rather than $\neg p$, and we have connectives $\otimes$ and $\oplus$ that seem to correspond to tensor product and direct sum of vector spaces in a nice way.
In particular, intuitionistic linear logic seems to be closer to linear algebra than classical linear logic is.