Let $V$ be an $n$-dimensional vector space over some field $\mathbb F$. I'm interested in the following result:
For every bivector $\alpha\in\bigwedge^2 V$, there exists a linearly independent set $S=\{\sigma_1,\ldots,\sigma_{2r}\}$ such that $$\alpha = \sigma_1\wedge\sigma_2 + \sigma_3\wedge\sigma_4 + \ldots + \sigma_{2r - 1}\wedge\sigma_{2r}.$$
Note that this is a sum of $r\le\frac{n}{2}$ terms, not $\dim(\bigwedge^2 V)=\binom{n}{2}$.
When $\mathbb F = \mathbb R$, I've seen this result stated without proof in many sources, sometimes with the added condition that the vectors in $S$ are orthogonal. However, I haven't been able to find a proof, nor any mention of what happens over other fields. To make matters worse, most sources restrict their attention to $V=\mathbb R^3$, where this is a corollary of the simpler statement that over an $n$-dimensional vector space, every $(n-1)$-vector is decomposable.
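For concreteness, here is the standard invariant behind the statement (not mentioned above, but it makes the claim easy to test): identifying $\alpha=\sum_{i<j}A_{ij}\,e_i\wedge e_j$ with the alternating matrix $A$, the minimal number of wedge terms is $r=\operatorname{rank}(A)/2$, and the rank of an alternating matrix is even over any field. A quick sympy check on the classic example $e_1\wedge e_2+e_3\wedge e_4$ in dimension $4$:

```python
import sympy as sp

# Identify a bivector with its alternating coefficient matrix:
#   alpha = sum_{i<j} A[i,j] e_i ^ e_j,   A[j,i] = -A[i,j].
# The minimal number of wedge terms is r = rank(A) / 2.

# alpha = e1 ^ e2 + e3 ^ e4 in dimension 4:
A = sp.zeros(4, 4)
A[0, 1], A[1, 0] = 1, -1
A[2, 3], A[3, 2] = 1, -1

r = A.rank() // 2
print(r)  # 2: a sum of two wedges, so this bivector is not decomposable
```

This also shows the bound $r\le n/2$ is attained: in dimension $4$ the matrix above has full rank $4$.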
Question
- How is this result proven in the general case?
- Does it generalize to arbitrary fields? If not, what's a counterexample?
Thoughts
This result reminds me of the fact that for an orthogonal transformation $T:\mathbb R^n\to\mathbb R^n$, the space $\mathbb R^n$ decomposes into one- and two-dimensional $T$-invariant subspaces. Since that fact relies on properties specific to $\mathbb R$, it doesn't make me hopeful for a generalization to other fields. But even over $\mathbb R$, I can't find a way to link it to the bivector theorem.
This is obviously true in $2$ dimensions, where every bivector is a scalar multiple of $e_1\wedge e_2$. For the induction step, suppose it's true in $n-1$ dimensions; we'll show it's also true in $n$ dimensions.
Let $(e_1,\cdots,e_n)$ be a basis for $V$, and let $W$ be the span of $(e_1,\cdots,e_{n-1})$, and write your bivector as
$$\alpha=\sum_{1\leq i<j\leq n}c_{i,j}\,e_i\wedge e_j$$ $$=\sum_{1\leq i<j\leq n-1}c_{i,j}\,e_i\wedge e_j+\sum_{1\leq i\leq n-1}c_{i,n}\,e_i\wedge e_n$$
with some scalar coefficients $c_{i,j}$. Consider the vector
$$\sigma=\sum_{1\leq i\leq n-1}c_{i,n}\,e_i\quad\in\;W.$$
If $\sigma=0$ then the problem reduces to $n-1$ dimensions. If $\sigma\neq0$ then it can be extended to a basis for $W$:
$$e_1'=\sigma,\quad e_2',\quad\cdots,\quad e_{n-1}'$$
Thus the bivector can be rewritten as
$$\alpha=\sum_{1\leq i<j\leq n-1}c_{i,j}'\,e_i'\wedge e_j'+\sigma\wedge e_n$$ $$=\sum_{2\leq j\leq n-1}c_{1,j}'\,e_1'\wedge e_j'+\sum_{2\leq i<j\leq n-1}c_{i,j}'\,e_i'\wedge e_j'+\sigma\wedge e_n$$ $$=e_1'\wedge\left(\sum_{2\leq j\leq n-1}c_{1,j}'\,e_j'+e_n\right)+\sum_{2\leq i<j\leq n-1}c_{i,j}'\,e_i'\wedge e_j'.$$
Let $X$ be the span of $(e_2',\cdots,e_{n-1}')$. The sum on the right is a bivector in $\bigwedge^2X$, so by induction it can be written as a sum of products of independent vectors in $X$. Of course $e_1'$ is not in $X$, so the vectors are still independent when $e_1'$ is included. And the expression in parentheses is not in $W$ (because of the $+e_n$), while the other vectors are in $W$, so they're still independent when that is included. Thus $\alpha$ is a sum of products of independent vectors.
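This proof can be made effective. Below is a sketch in sympy (the function name `decompose_bivector` is mine); it uses a greedy rank-reduction on the coefficient matrix rather than a literal transcription of the induction step, but it produces the same kind of decomposition: each pass splits off one wedge $u\wedge v$ built from two columns of $A$, which drops the rank by exactly $2$. Notably, nothing in it is specific to $\mathbb R$ — it runs in exact arithmetic, consistent with the proof above working over an arbitrary field (answering the second question affirmatively).

```python
import sympy as sp

def decompose_bivector(A):
    """Write the bivector with alternating coefficient matrix A as a sum
    of wedges u ^ v of linearly independent vectors, returned as (u, v)
    pairs of column vectors.

    Each step picks a nonzero entry A[i, j] and subtracts the matrix of
    (col_i ^ col_j) / A[i, j]; this kills e_i and e_j and lowers the rank
    by exactly 2, so the loop runs rank(A)/2 times.
    """
    A = sp.Matrix(A)  # work on a copy, exact arithmetic
    n = A.rows
    pairs = []
    while any(x != 0 for x in A):
        i, j = next((r, c) for r in range(n) for c in range(n) if A[r, c] != 0)
        u = A[:, i] / A[i, j]          # u and v lie in the column space of A
        v = A[:, j]
        pairs.append((u, v))
        A = A - (u * v.T - v * u.T)    # subtract the matrix of u ^ v
    return pairs

# Example: alpha = e1^e2 + e1^e3 + e3^e4 in dimension 4.
M = sp.zeros(4, 4)
for i, j, cij in [(0, 1, 1), (0, 2, 1), (2, 3, 1)]:
    M[i, j], M[j, i] = cij, -cij
pairs = decompose_bivector(M)
```

Since the $2r$ output vectors span the $2r$-dimensional column space of $A$, they are automatically independent, matching the statement being proved.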
Now for the stronger claim, in the case of a real inner product space: Any bivector is a sum of products of orthogonal vectors, not just independent vectors.
I'll use some geometric algebra identities, such as $a\,\lrcorner\,(b\wedge c)=(a\cdot b)c-b(a\cdot c)$ where $a,b,c$ are vectors.
Given the bivector $\alpha$ (let's call it $B$ from now on), the linear transformation $v\mapsto v\,\lrcorner\,B=-B\,\llcorner\,v$ is antisymmetric, because
$$u\cdot(v\,\lrcorner\,B)=u\,\lrcorner\,(v\,\lrcorner\,B)=(u\wedge v)\,\lrcorner\,B$$ $$=(-v\wedge u)\,\lrcorner\,B=-v\cdot(u\,\lrcorner\,B).$$
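In coordinates this is easy to sanity-check: with the convention $B=\sum_{i<j}A_{ij}\,e_i\wedge e_j$, the contraction $v\,\lrcorner\,B$ corresponds to the matrix-vector product $A^{\mathsf T}v$ (the overall sign depends on one's contraction convention, but the antisymmetry does not). A quick numerical check:

```python
import numpy as np

# Skew coefficient matrix of a random bivector B in dimension 5.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 5))
A = X - X.T

u = rng.standard_normal(5)
v = rng.standard_normal(5)

lhs = u @ (A.T @ v)    # u . (v -| B)
rhs = -v @ (A.T @ u)   # -v . (u -| B)
# the map v -> v -| B is antisymmetric:
assert np.isclose(lhs, rhs)

# and its square is symmetric, as used next:
S = A.T @ A.T
assert np.allclose(S, S.T)
```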
It follows that $(\,\lrcorner\,B)^2$ is symmetric. By the spectral theorem, it has an orthonormal eigenbasis. Let $u$ be a unit eigenvector, with eigenvalue $\lambda$, so $(u\,\lrcorner\,B)\,\lrcorner\,B=\lambda u$, and let $v=u\,\lrcorner\,B$. Then we have
$$v\cdot v=(u\,\lrcorner\,B)\,\lrcorner\,(-B\,\llcorner\,u)=-((u\,\lrcorner\,B)\,\lrcorner\,B)\,\llcorner\,u=-\lambda u\,\llcorner\,u=-\lambda,$$ $$u\cdot v=u\,\lrcorner\,(u\,\lrcorner\,B)=(u\wedge u)\,\lrcorner\,B=0,$$ $$u\,\lrcorner\,(u\wedge v)=v=u\,\lrcorner\,B,$$ $$v\,\lrcorner\,(u\wedge v)=-u(v\cdot v)=\lambda u=v\,\lrcorner\,B.$$
Thus the bivector $u\wedge v$ has the same effect as $B$ on the subspace spanned by $u$ and $v$. (This subspace is 1D or 2D, according to whether $\lambda=0$ or not.) And it acts as zero on the orthogonal complement: if $u\cdot w=v\cdot w=0$, then $w\,\lrcorner\,(u\wedge v)=0$. Therefore, if we write
$$B=u\wedge v+B',$$
then the bivector $B'$ annihilates $u$ and $v$ and acts like $B$ on the orthogonal complement. It follows (by expanding $B'$ in terms of the orthonormal eigenbasis and using $u\,\lrcorner\,B'=v\,\lrcorner\,B'=0$) that $B'$ is built from vectors in the orthogonal complement of $\operatorname{span}(u,v)$. We can then induct on the dimension to prove the claim.
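The spectral argument translates directly into a numerical procedure. A sketch with numpy (the helper name `orthogonal_split` is mine, and I again use the dictionary between $B$ and its skew coefficient matrix $A$, under which $u\,\lrcorner\,B$ becomes $A^{\mathsf T}u$):

```python
import numpy as np

def orthogonal_split(A, tol=1e-8):
    """Split a real skew-symmetric A into pieces u v^T - v u^T with all
    the u's and v's mutually orthogonal (so the bivector is a sum of
    wedges of orthogonal vectors).

    Follows the spectral argument: u is a unit eigenvector of the
    symmetric matrix A @ A with negative eigenvalue lambda, v = A.T @ u
    plays the role of u -| B (with u.v = 0 and |v|^2 = -lambda), and
    subtracting the matrix of u ^ v deflates A by one pair.
    """
    B = A.astype(float).copy()
    pairs = []
    while np.linalg.norm(B) > tol:
        w, U = np.linalg.eigh(B @ B)   # eigenvalues ascending, all <= 0
        u = U[:, 0]                    # unit eigenvector, most negative lambda
        v = B.T @ u                    # the vector called u -| B in the proof
        pairs.append((u, v))
        B = B - (np.outer(u, v) - np.outer(v, u))
    return pairs

# Example: a random skew matrix in dimension 6 (generically rank 6: 3 pairs).
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 6))
A = X - X.T
pairs = orthogonal_split(A)
```

Each deflation step leaves a $B'$ supported on the orthogonal complement of the pair just removed, exactly as in the induction above, so the pairs produced across iterations are mutually orthogonal.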