A proof for bivector decomposition


Let $V$ be an $n$-dimensional vector space over some field $\mathbb F$. I'm interested in the following result:

For every bivector $\alpha\in\bigwedge^2 V$, there exists a linearly independent set $S=\{\sigma_1,\ldots,\sigma_{2r}\}$ such that $$\alpha = \sigma_1\wedge\sigma_2 + \sigma_3\wedge\sigma_4 + \ldots + \sigma_{2r - 1}\wedge\sigma_{2r}.$$

Note that this is a sum of $r\le\frac{n}{2}$ terms, not $\dim(\bigwedge^2 V)=\binom{n}{2}$.
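For concreteness, here is a small illustration of my own (not from any of the sources): in $\mathbb R^4$,

$$e_1\wedge e_2+e_1\wedge e_3=e_1\wedge(e_2+e_3)$$

is a single term ($r=1$), while $\beta=e_1\wedge e_2+e_3\wedge e_4$ genuinely needs two terms, since $\beta\wedge\beta=2\,e_1\wedge e_2\wedge e_3\wedge e_4\neq0$, whereas $(\sigma_1\wedge\sigma_2)\wedge(\sigma_1\wedge\sigma_2)=0$ for any single decomposable term.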

When $\mathbb F = \mathbb R$, I've seen this result mentioned without proof in many sources, even with the added condition that the vectors in $S$ are orthogonal. However, I haven't been able to find a proof, nor any mention of what happens over other fields. To make matters worse, most sources seem to restrict their attention to $V=\mathbb R^3$, where this is a corollary of the simpler statement that over an $n$-dimensional vector space, every $(n-1)$-vector is decomposable.

Question

  1. How is this result proven in the general case?
  2. Does it generalize to arbitrary fields? If not, what's a counterexample?

Thoughts

This result reminds me of the fact that for an orthogonal transformation $T:\mathbb R^n\to\mathbb R^n$, the space $\mathbb R^n$ can be decomposed into one- or two-dimensional $T$-invariant subspaces. That fact relies on special properties of $\mathbb R$, so it doesn't make me hopeful for a generalization to other fields. Even using it, I can't find a way to link this result to the bivector theorem.


3 Answers

Accepted Answer

This is obviously true in $2$ dimensions. For induction, suppose it's true in $n-1$ dimensions; we'll see that it's also true in $n$ dimensions.

Let $(e_1,\cdots,e_n)$ be a basis for $V$, and let $W$ be the span of $(e_1,\cdots,e_{n-1})$, and write your bivector as

$$\alpha=\sum_{1\leq i<j\leq n}c_{i,j}\,e_i\wedge e_j$$ $$=\sum_{1\leq i<j\leq n-1}c_{i,j}\,e_i\wedge e_j+\sum_{1\leq i\leq n-1}c_{i,n}\,e_i\wedge e_n$$

with some scalar coefficients $c_{i,j}$. Consider the vector

$$\sigma=\sum_{1\leq i\leq n-1}c_{i,n}\,e_i\quad\in\;W.$$

If $\sigma=0$ then the problem reduces to $n-1$ dimensions. If $\sigma\neq0$ then it can be taken as the first vector of a basis for $W$:

$$e_1'=\sigma,\quad e_2',\quad\cdots,\quad e_{n-1}'$$

Thus the bivector can be rewritten as

$$\alpha=\sum_{1\leq i<j\leq n-1}c_{i,j}'\,e_i'\wedge e_j'+\sigma\wedge e_n$$ $$=\sum_{2\leq j\leq n-1}c_{1,j}'\,e_1'\wedge e_j'+\sum_{2\leq i<j\leq n-1}c_{i,j}'\,e_i'\wedge e_j'+\sigma\wedge e_n$$ $$=e_1'\wedge\left(\sum_{2\leq j\leq n-1}c_{1,j}'\,e_j'+e_n\right)+\sum_{2\leq i<j\leq n-1}c_{i,j}'\,e_i'\wedge e_j'.$$

Let $X$ be the span of $(e_2',\cdots,e_{n-1}')$. The sum on the right is a bivector in $\bigwedge^2X$, so by induction it can be written as a sum of products of independent vectors in $X$. Of course $e_1'$ is not in $X$, so the vectors are still independent when $e_1'$ is included. And the expression in parentheses is not in $W$ (because of the $+e_n$), while the other vectors are in $W$, so they're still independent when that is included. Thus $\alpha$ is a sum of products of independent vectors.
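As a sanity check, here is one pass of this induction in $\mathbb R^3$ with $W=\operatorname{span}(e_1,e_2)$ (a worked example of my own): write

$$\alpha=a\,e_1\wedge e_2+b\,e_1\wedge e_3+c\,e_2\wedge e_3,\qquad\sigma=b\,e_1+c\,e_2.$$

If $\sigma\neq0$, say $c\neq0$, take $e_1'=\sigma$ and $e_2'=e_1$; then $e_1\wedge e_2=-\tfrac1c\,e_1'\wedge e_2'$, so

$$\alpha=-\tfrac ac\,e_1'\wedge e_2'+e_1'\wedge e_3=e_1'\wedge\left(e_3-\tfrac ac\,e_2'\right),$$

a single decomposable term, recovering the familiar fact that every bivector in three dimensions is decomposable.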


Now for the stronger claim, in the case of a real inner product space: Any bivector is a sum of products of orthogonal vectors, not just independent vectors.

I'll use some geometric algebra identities, such as $a\,\lrcorner\,(b\wedge c)=(a\cdot b)c-b(a\cdot c)$ where $a,b,c$ are vectors.
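For readers who haven't seen the left contraction $\lrcorner$ before, a quick check of this identity on orthonormal basis vectors:

$$e_1\,\lrcorner\,(e_1\wedge e_2)=(e_1\cdot e_1)\,e_2-e_1\,(e_1\cdot e_2)=e_2,\qquad e_3\,\lrcorner\,(e_1\wedge e_2)=0.$$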

Given the bivector $\alpha$ (call it $B$ from now on), the linear transformation $v\mapsto v\,\lrcorner\,B=-B\,\llcorner\,v$ is antisymmetric, because

$$u\cdot(v\,\lrcorner\,B)=u\,\lrcorner\,(v\,\lrcorner\,B)=(u\wedge v)\,\lrcorner\,B$$ $$=(-v\wedge u)\,\lrcorner\,B=-v\cdot(u\,\lrcorner\,B).$$

It follows that $(\,\lrcorner\,B)^2$ is symmetric. By the spectral theorem, it has an orthonormal eigenbasis. Let $u$ be a unit eigenvector, with eigenvalue $\lambda$, so $(u\,\lrcorner\,B)\,\lrcorner\,B=\lambda u$, and let $v=u\,\lrcorner\,B$. Then we have

$$v\cdot v=(u\,\lrcorner\,B)\,\lrcorner\,(-B\,\llcorner\,u)=-((u\,\lrcorner\,B)\,\lrcorner\,B)\,\llcorner\,u=-\lambda u\,\llcorner\,u=-\lambda,$$ $$u\cdot v=u\,\lrcorner\,(u\,\lrcorner\,B)=(u\wedge u)\,\lrcorner\,B=0,$$ $$u\,\lrcorner\,(u\wedge v)=v=u\,\lrcorner\,B,$$ $$v\,\lrcorner\,(u\wedge v)=-u(v\cdot v)=\lambda u=v\,\lrcorner\,B.$$

Thus the bivector $u\wedge v$ has the same effect as $B$ on the subspace spanned by $u$ and $v$. (This subspace is one- or two-dimensional, depending on whether $\lambda=0$.) And it acts as zero on the orthogonal complement: if $u\cdot w=v\cdot w=0$, then $w\,\lrcorner\,(u\wedge v)=0$. Therefore, if we write

$$B=u\wedge v+B',$$

then the bivector $B'$ acts as zero on $u$ and $v$, and acts like $B$ on the orthogonal complement. It follows (by expanding $B'$ in terms of the orthonormal eigenbasis and considering $u\,\lrcorner\,B'=0$) that $B'$ is made of vectors in the orthogonal complement. And then we can induct on the dimension to prove the claim.
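This construction is effectively an algorithm, so here is a minimal numpy sketch of it (my own illustration, not part of the answer). Here `A` is the matrix of the skew-symmetric map $v\mapsto v\,\lrcorner\,B$, each extracted pair $(u,v)$ spans one orthogonal plane of the decomposition, and the name `skew_decompose` is my own.

```python
import numpy as np

def skew_decompose(A, tol=1e-10):
    """Split a real skew-symmetric matrix A (the matrix of v -> v |_ B)
    into plane pieces lam * (v u^T - u v^T) with u, v orthonormal."""
    A = A.copy()
    planes = []
    while np.linalg.norm(A) > tol:
        # A @ A is symmetric negative semidefinite; its most negative
        # eigenvalue is -lam^2 for the plane with the largest weight lam.
        w, Q = np.linalg.eigh(A @ A)
        u = Q[:, 0]                     # unit eigenvector of A @ A
        lam = np.sqrt(-w[0])
        v = A @ u / lam                 # unit vector orthogonal to u
        planes.append((lam, u, v))
        # subtract the plane piece; the residual is skew and acts only
        # on the orthogonal complement of span{u, v}
        A = A - lam * (np.outer(v, u) - np.outer(u, v))
    return planes

# quick check on a random bivector in R^5
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M - M.T
planes = skew_decompose(A)
B = sum(lam * (np.outer(v, u) - np.outer(u, v)) for lam, u, v in planes)
assert np.allclose(A, B)
assert len(planes) <= 5 // 2            # r <= n/2, as in the question
```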

Answer

It suffices to show that $\bigwedge^2 V$ is a linear subspace of the space of multivectors generated by $V$. So is it a linear subspace?

It contains $0$: the zero bivector.

It is closed under addition: the sum of any two $2$-vectors is a $2$-vector.

It is closed under scalar multiplication, since any scalar multiple of a $2$-vector is a $2$-vector.

Since it's a linear subspace, it's a linear space. Since it's a linear space, it has a basis.

Answer

It will be easier to prove the dual statement, about a skew-symmetric bilinear form $\omega : V \otimes V \to F$ (which can be interpreted as an element of $\Lambda^2(V^{\ast})$ and vice versa; in characteristic $2$ one should ask for the stronger alternating condition $\omega(v, v) = 0$). First, consider the subspace $W$ of vectors $v \in V$ such that $\omega(v, -) = 0$; this is called the kernel or the radical of $\omega$. $\omega$ naturally descends to a nondegenerate skew-symmetric bilinear form on the quotient $V/W$; from now on we will assume nondegeneracy WLOG, so that $\omega$ is a symplectic form.

Then the claim is that a symplectic form always admits a symplectic basis, meaning $V$ admits a basis $\{ e_1, \dots, e_r, f_1, \dots, f_r \}$ satisfying $\omega(e_i, f_j) = \delta_{ij}$ and $\omega(e_i, e_j) = \omega(f_i, f_j) = 0$; in particular $\dim V = 2r$ is even. In the dual basis this says exactly that $\omega = \sum_{i=1}^r e_i^{\ast} \wedge f_i^{\ast}$, which is the desired decomposition.

This is true over every field and can be proven using a variation of Gram-Schmidt, as follows. Namely, we can pick $e_1$ to be any nonzero vector and $f_1$ to be any vector satisfying $\omega(e_1, f_1) = 1$, which exists by nondegeneracy. Then, inductively, if we've constructed $e_1, f_1, \dots e_k, f_k$, we can pick $e_{k+1}$ to be any nonzero vector orthogonal (with respect to $\omega$) to $V_k = \text{span}(e_1, f_1, \dots e_k, f_k)$ (if there isn't one then we stop), then pick $f_{k+1}$ by starting from a vector satisfying $\omega(e_{k+1}, f_{k+1}) = 1$ and then subtracting off multiples of $e_1, f_1, \dots e_k, f_k$ to make it orthogonal to $V_k$, as in the usual Gram-Schmidt argument.

Once this process terminates we've arrived at $V_r = \text{span}(e_1, f_1, \dots e_r, f_r)$ and its orthogonal complement is zero. Orthogonal complements with respect to any nondegenerate bilinear form satisfy $\dim U + \dim U^{\perp} = \dim V$ for any subspace $U$, so if the orthogonal complement of $V_r$ is zero then $V_r = V$ and so $\{ e_i, f_i \}$ is a basis as desired.

In the not-necessarily-nondegenerate case we first pick an arbitrary basis of the kernel $W$, then lift a symplectic basis of $V/W$ to $V$.
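The argument above is constructive enough to run. Here is a minimal sketch over $\mathbb Q$ in exact arithmetic, showing it really does work over a field other than $\mathbb R$; the function name and the convention that $\omega$ is given by a skew-symmetric Gram matrix $G$ with $\omega(x, y) = x^{\mathsf T} G y$ are my own choices, not from the answer.

```python
from fractions import Fraction

def symplectic_basis(G):
    """Symplectic Gram-Schmidt over Q: G is the skew-symmetric Gram matrix
    of omega in some basis, i.e. omega(x, y) = x^T G y.  Returns hyperbolic
    pairs (e_k, f_k) with omega(e_i, f_j) = delta_ij and
    omega(e_i, e_j) = omega(f_i, f_j) = 0, plus a basis of the radical."""
    n = len(G)
    G = [[Fraction(x) for x in row] for row in G]

    def omega(x, y):
        return sum(x[i] * G[i][j] * y[j] for i in range(n) for j in range(n))

    rest = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    pairs = []
    while True:
        # look for a pair of remaining vectors on which omega is nonzero
        hit = next(((i, j) for i in range(len(rest))
                    for j in range(i + 1, len(rest))
                    if omega(rest[i], rest[j]) != 0), None)
        if hit is None:
            return pairs, rest  # omega vanishes on rest: it spans the radical
        i, j = hit
        e = rest[i]
        c = omega(e, rest[j])
        f = [x / c for x in rest[j]]    # rescale so omega(e, f) = 1
        rest = [v for k, v in enumerate(rest) if k not in (i, j)]
        # make every remaining vector omega-orthogonal to both e and f
        rest = [[v[k] - omega(e, v) * f[k] + omega(f, v) * e[k]
                 for k in range(n)] for v in rest]
        pairs.append((e, f))

# example: a generic skew form on Q^3 has one hyperbolic pair and a
# one-dimensional radical, matching r = 1 <= 3/2
pairs, radical = symplectic_basis([[0, 1, 2], [-1, 0, 3], [-2, -3, 0]])
assert len(pairs) == 1 and len(radical) == 1
```

Each pass through the loop removes two vectors, so the process terminates, and the leftover vectors span the radical, matching the reduction to $V/W$ described above.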