Why the inclusion-exclusion principle fails for vector subspaces


Let $U,V,W$ be three vector subspaces of the same vector space.

It is well-known that $$\dim (U + V) = \dim U + \dim V - \dim (U \cap V)$$ works but $$ \dim(U +V + W) = \dim U + \dim V + \dim W - \dim (U \cap V) - \dim (U \cap W) - \dim (V \cap W) + \dim(U \cap V \cap W) $$ fails.

A counterexample is given by three distinct lines through the origin in $\mathbb{R}^2$: the left-hand side is $2$, but every pairwise intersection (and hence the triple intersection) is $\{0\}$, so the right-hand side is $1+1+1-0-0-0+0 = 3$.
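This count can be checked numerically. The following sketch (assuming NumPy; the helper names `dim_sum` and `dim_cap` are my own) computes dimensions of sums via matrix rank and dimensions of pairwise intersections via the two-subspace formula, which does hold:

```python
import numpy as np

def dim_sum(*spans):
    # dim(U1 + ... + Uk) = rank of the stacked spanning vectors
    return np.linalg.matrix_rank(np.vstack(spans))

def dim_cap(A, B):
    # two-subspace formula (valid): dim(A ∩ B) = dim A + dim B - dim(A + B)
    return (np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B)
            - dim_sum(A, B))

# three distinct lines through the origin in R^2
U = np.array([[1.0, 0.0]])  # x-axis
V = np.array([[0.0, 1.0]])  # y-axis
W = np.array([[1.0, 1.0]])  # diagonal

lhs = dim_sum(U, V, W)
# U ∩ V = {0} already forces U ∩ V ∩ W = {0}, so its dimension is 0
rhs = (3 - dim_cap(U, V) - dim_cap(U, W) - dim_cap(V, W) + 0)
print(lhs, rhs)  # 2 3
```

The left-hand side is $2$ while the inclusion-exclusion right-hand side is $3$, exhibiting the failure.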

Questions:

  1. (Why it fails) Philosophically, a vector space is completely determined by a basis. The inclusion-exclusion principle holds for three sets, in particular for sets of basis vectors. Why does the same principle fail for three vector subspaces? (I have noticed some posts and comments here, but the failure of the distributive law did not resolve my doubt.)
  2. (How to fix it) It is well known that ({vector subspaces of $K$}, $+$, $\cap$) forms a (bounded modular) lattice (cf. Wikipedia), and ({subsets of $S$}, $\cup$, $\cap$) is also a lattice. If I take $S$ to be a basis for $K$ and restrict attention to subspaces spanned by subsets of $S$, then the span and underlying-set maps form a free-forgetful pair, and the span becomes a lattice isomorphism. To be precise, $\langle S_1\cup S_2\rangle=\langle S_1\rangle+\langle S_2\rangle$ and $\langle S_1\cap S_2\rangle=\langle S_1\rangle\cap \langle S_2\rangle$ for $S_1,S_2\subseteq S$, where the angle brackets denote the span. In this way I get isomorphic lattices. I therefore suppose this is the correct form of the inclusion-exclusion principle, i.e., it holds under the condition that we fix a basis $S$ and require $U,V,W$ to be spans of subsets of $S$. Any better form of the inclusion-exclusion principle for vector spaces is also welcome.
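Under this restriction the check reduces to plain set-theoretic inclusion-exclusion, since $\dim\langle T\rangle = |T|$ for $T\subseteq S$ and span turns $\cup,\cap$ of subsets into $+,\cap$ of subspaces. A minimal sketch of the fixed version (the basis $S$ is modeled by the index set $\{0,1,2\}$, an assumption of mine for illustration):

```python
# Fix a basis S = {e_0, e_1, e_2} of K = R^3, represented by indices {0, 1, 2}.
# Basis-aligned subspaces behave exactly like subsets of S:
# dim<T> = |T|, <T1> + <T2> = <T1 ∪ T2>, <T1> ∩ <T2> = <T1 ∩ T2>.
U = {0, 1}  # <e_0, e_1>
V = {1, 2}  # <e_1, e_2>
W = {0, 2}  # <e_0, e_2>

lhs = len(U | V | W)  # dim(U + V + W)
rhs = (len(U) + len(V) + len(W)
       - len(U & V) - len(U & W) - len(V & W)
       + len(U & V & W))
print(lhs, rhs)  # 3 3 — inclusion-exclusion holds for basis-aligned subspaces
```

Note that the counterexample above cannot occur here: three distinct lines in $\mathbb{R}^2$ cannot all be spanned by subsets of a single two-element basis.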