Bivector into orthogonal components


Suppose I have a metric $g$ and a bivector $ F $ on a four-dimensional vector space. It seems I can always decompose $ F $ into a sum of two simple bivectors built from four mutually orthogonal vectors $a,b,c,d$: $$ F = a\wedge b + c\wedge d. $$

Here is a sketch of a proof, using tensor notation in which the metric lowers indices: $$F^\mu_{\,\,\,\,\nu}\equiv F^{\mu\rho}g_{\rho\nu}.$$ We pick a unit eigenvector $u$ of $F^2$, $$ F^\mu_{\,\,\,\,\rho}F^\rho_{\,\,\,\,\nu}u^\nu = \lambda u^\mu,$$ and define the vector $v$, $$v^\mu \equiv F^\mu_{\,\,\,\,\nu}u^\nu,$$ so that $F$ decomposes as $$F^\mu_{\,\,\,\,\nu} = v^\mu u_\nu - u^\mu v_\nu + G^\mu_{\,\,\,\,\nu},$$ where $G$ is defined as the remainder. Using the antisymmetry of $F_{\mu\nu}$ (which gives $u_\mu v^\mu = 0$), one checks that $G$ annihilates both $u$ and $v$, so unless I'm being stupid this is the required decomposition into orthogonal simple bivectors (once we raise the index). But this is surprising to me, since $g$ is arbitrary.
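Here is a quick numerical sanity check of the sketch (my own code, with my own variable names). I assume $g$ is positive definite so that $F^2$ has real eigenvectors and $u$ can be normalised to $g(u,u)=1$; this is stronger than the arbitrary non-degenerate $g$ in the question, where zero-norm eigenvectors could obstruct the normalisation step.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random symmetric positive-definite metric g_{mu nu} (an assumption;
# the question allows arbitrary non-degenerate g).
B = rng.standard_normal((n, n))
g = B @ B.T + np.eye(n)
g_inv = np.linalg.inv(g)

A = rng.standard_normal((n, n))
F_up = g_inv @ (A - A.T) @ g_inv      # F^{mu nu}, antisymmetric
Fm = F_up @ g                         # F^mu_nu = F^{mu rho} g_{rho nu}
M = Fm @ Fm                           # (F^2)^mu_nu, self-adjoint w.r.t. g

# Diagonalise M via the symmetric matrix L^T M L^{-T}, where g = L L^T.
# This guarantees a real eigenvector (np.linalg.eig can return spurious
# complex pairs for the doubly degenerate eigenvalues of F^2).
L = np.linalg.cholesky(g)
w = np.linalg.eigh(L.T @ M @ np.linalg.inv(L.T))[1][:, 0]
u = np.linalg.solve(L.T, w)
u = u / np.sqrt(u @ g @ u)            # unit eigenvector: g(u, u) = 1

v = Fm @ u                            # v^mu = F^mu_nu u^nu
S = np.outer(v, u) - np.outer(u, v)   # the simple (v wedge u)^{mu nu} part
Gr = F_up - S                         # remainder G^{mu nu}

# G annihilates u and v once an index is lowered, i.e. the plane of G
# is g-orthogonal to the plane spanned by u and v.
print(np.allclose(Gr @ g @ u, 0), np.allclose(Gr @ g @ v, 0))
```

The two checks confirm that, for this random $g$ and $F$, the remainder $G$ is orthogonal to the plane of $u\wedge v$.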

Can someone point out my mistake, or explain why this is obviously true and I shouldn't be surprised? I am familiar with the idea that any antisymmetric matrix can be put into block-diagonal form using orthogonal matrices; if the metric were Euclidean, this would give the required decomposition. Is there a generalization of this idea to an arbitrary metric?
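To illustrate the Euclidean case I have in mind (my own sketch, not part of the question): a complex eigenvector $w = a + ic$ of a real antisymmetric $F$ for eigenvalue $i\beta$ spans an invariant plane, and because $F$ is normal one gets $a \perp c$ and $|a| = |c|$ for free, which is exactly a $2\times 2$ block of the orthogonal block-diagonal form.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
F = A - A.T                        # antisymmetric; Euclidean metric implied

vals, vecs = np.linalg.eig(F)      # eigenvalues come in pairs +/- i*beta
i = np.argmax(vals.imag)           # pick the pair with the largest beta
beta = vals[i].imag
w = vecs[:, i]
a, c = w.real, w.imag              # F w = i*beta*w gives F a = -beta c,
                                   # F c = beta a: span(a, c) is invariant

# a and c are orthogonal with equal norm, so F restricted to their plane
# is beta times a rotation generator; the orthogonal complement carries
# the other 2x2 block.
print(np.allclose(a @ c, 0), np.allclose(F @ a, -beta * c))
```

Splitting off this plane and repeating on its orthogonal complement rebuilds the full block-diagonal form; my question is whether some version of this survives when the Euclidean inner product is replaced by an arbitrary $g$.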