Simple bivectors in four dimensions


I am trying to characterize simple bivectors in four dimensions, i.e. elements $B \in \bigwedge^2 \mathbb{R}^4$ such that $B = a \wedge b$ for two vectors $a, b \in \mathbb{R}^4$. In the book Clifford Algebras and Spinors by Pertti Lounesto (p. 87), I found the claim that a bivector in four dimensions is simple if and only if its square is real. I can see why the square of any simple bivector is real, since we have the identity $(a \wedge b)^2 = -|a \wedge b|^2$. However, I cannot prove the converse: if the square of a bivector is real, then it is simple.

Writing $e_{ij} = e_i \wedge e_j$ and choosing $\{e_{14}, e_{24}, e_{34}, e_{23}, e_{31}, e_{12}\}$ as a basis of $\bigwedge^2\mathbb{R}^4$ (I have specific reasons for choosing this slightly atypical basis), I find by direct computation that $B^2 = -|B|^2 + 2(B_{12}B_{34} + B_{14} B_{23} + B_{31}B_{24})e_{1234}$, where $e_{1234} = e_1 e_2 e_3 e_4$ denotes the pseudoscalar of the Clifford algebra of $\mathbb{R}^4$. Yet I do not see how to conclude from this.
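As a sanity check, the displayed formula for $B^2$ can be verified numerically in a matrix representation of the Clifford algebra of $\mathbb{R}^4$. The sketch below is my own, not from the book: it builds four anticommuting Hermitian matrices from Pauli matrices (so each $\gamma_i^2 = I$), forms a random bivector in the basis above, and compares $B^2$ against $-|B|^2 + 2(B_{12}B_{34} + B_{14}B_{23} + B_{31}B_{24})e_{1234}$.

```python
import numpy as np

# Four anticommuting Hermitian generators of Cl(4,0), each squaring to I,
# built from Pauli matrices via Kronecker products.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

g = [np.kron(s1, I2), np.kron(s2, I2), np.kron(s3, s1), np.kron(s3, s2)]
I4 = np.eye(4, dtype=complex)
e1234 = g[0] @ g[1] @ g[2] @ g[3]   # the pseudoscalar e_1 e_2 e_3 e_4

def e(i, j):
    """Basis bivector e_i e_j (1-based indices, i != j)."""
    return g[i - 1] @ g[j - 1]

# Random bivector in the basis {e14, e24, e34, e23, e31, e12}.
rng = np.random.default_rng(0)
B14, B24, B34, B23, B31, B12 = rng.standard_normal(6)
B = (B14 * e(1, 4) + B24 * e(2, 4) + B34 * e(3, 4)
     + B23 * e(2, 3) + B31 * e(3, 1) + B12 * e(1, 2))

norm2 = B14**2 + B24**2 + B34**2 + B23**2 + B31**2 + B12**2
pseudo = 2 * (B12 * B34 + B14 * B23 + B31 * B24)

# B^2 has only a scalar part and a pseudoscalar part.
assert np.allclose(B @ B, -norm2 * I4 + pseudo * e1234)
```

The cross terms work out because bivectors on disjoint index pairs commute and multiply to $\pm e_{1234}$, while bivectors sharing one index anticommute, so their cross terms cancel.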

As a more general approach, I thought of using the relationship between simple rotations of $\mathbb{R}^4$ and simple bivectors. In fact, the simple bivectors form a double cover of the simple rotations, so the geometry of the simple bivectors should be something like the choice of a plane in $\mathbb{R}^4$ and the choice of an angle $\theta \in [- \pi, +\pi]$, i.e. $$ \text{simple bivectors } \simeq Gr(2, 4) \times [- \pi, +\pi]. $$ Is the latter more or less correct? And how can this help me to characterize more precisely simple bivectors in $\mathbb{R}^4$?
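Regarding the connection with $Gr(2,4)$: the coefficient $B_{12}B_{34} + B_{14}B_{23} + B_{31}B_{24}$ is (up to sign conventions) the Plücker relation, whose vanishing cuts out the Grassmannian of 2-planes inside $\mathbb{P}(\bigwedge^2 \mathbb{R}^4)$. A quick numerical check, using a hypothetical `wedge` helper in plain numpy, that this quantity vanishes identically on wedges of two vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.standard_normal(4), rng.standard_normal(4)

def wedge(u, v):
    """Bivector components (u ^ v)_ij = u_i v_j - u_j v_i, 1-based keys."""
    return {(i, j): u[i - 1] * v[j - 1] - u[j - 1] * v[i - 1]
            for i in range(1, 5) for j in range(1, 5) if i < j}

B = wedge(a, b)
# B12*B34 + B14*B23 + B31*B24 with B31 = -B13 is exactly the
# Pluecker relation p12 p34 - p13 p24 + p14 p23, which is
# identically zero for a wedge of two vectors.
pluecker = (B[(1, 2)] * B[(3, 4)] + B[(1, 4)] * B[(2, 3)]
            - B[(1, 3)] * B[(2, 4)])
assert abs(pluecker) < 1e-12
```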



I think that the statement "If ... simple." is actually a definition of a simple bivector. Consider $n$ even. What appears to be true is that any bivector in a space of dimension $n$ may be expressed as a sum of $\frac{n}{2}$ commuting bivectors. The question, then, is whether the commuting bivectors are each simple. The answer is yes for Euclidean and Minkowski spaces (Riesz, or Hestenes and Sobczyk). The answer is maybe for other spaces where $n = p + q$ with $p$ and $q$ both $2$ or more. The result depends on the eigenvalues of the map $v \mapsto B \cdot v$, where $v$ is a vector and $B$ a bivector. If the eigenvalues are real or purely imaginary, the corresponding bivectors are simple. But if they are complex, there are two commuting orthogonal non-simple bivectors whose exponentials correspond to isoclinic rotations. When $n$ is odd, there are $\frac{n-1}{2}$ commuting bivectors. $R(2,2)$ is the lowest-dimensional space where complex eigenvalues may exist.


The limitations of my answer may be clearer with this discussion. There are 3 steps.

  1. Given the bivector $B$, form a square matrix whose columns are $B \cdot e_i$, where $e_i$ is the $i$th basis element. Call this matrix $B$ as well. It has the form $AK$, where $A$ is skew-symmetric and $K$ is diagonal with the first $p$ entries $+1$ and the last $q$ entries $-1$.
  2. Form the spectral decomposition of the matrix $B$. I assume the eigenvalues are distinct and $n$ is even. With eigenvalues $f_i$, eigenvectors $u_i$, and adjoint eigenvectors $v_i$, this is $B = \sum_{i=1}^n f_i \, u_i v_i^T / (v_i^T u_i)$, where $T$ denotes transpose. This may be rewritten $B = \sum_{i=1}^n f_i M_i$, where each $M_i$ is a square rank-1 matrix. For a matrix of the form $AK$, the characteristic polynomial is real and has no odd-order coefficients, so the eigenvalues come in pairs $\pm a$ if real, in pairs $\pm ib$ if imaginary, and in quadruples $\pm c \pm id$ if complex.
  3. The next step is to combine the $n$ terms into $n/2$ terms, as follows, by example. In $R(5,3)$ all types may occur. Then $B = aM_1 - aM_2 + ibM_3 - ibM_4 + (c+id)M_5 + (c-id)M_6 + (-c+id)M_7 + (-c-id)M_8$. This is rewritten $B = a[M_1 - M_2] + b[iM_3 - iM_4] + c[M_5 + M_6 - M_7 - M_8] + d[iM_5 - iM_6 + iM_7 - iM_8]$. One then observes, and can prove, that the four bracketed matrices are real and have the form $AK$. Converting back to bivectors reverses step 1 and yields the quoted result. If the eigenvalues are not distinct, things get messy. By the way, Matlab will generally not order the eigenvalues and eigenvectors as you may wish.
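In the Euclidean case ($K = I$, signature $(4,0)$), the three steps can be sketched numerically, assuming plain numpy: the bivector is represented by its skew-symmetric matrix, whose eigenvalues are $\pm ib_1, \pm ib_2$; each conjugate pair yields a rank-2 skew piece supported on an invariant plane, and the two pieces commute and sum back to $B$.

```python
import numpy as np

# Step 1: a random skew-symmetric 4x4 matrix plays the role of B (K = I).
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = A - A.T

# Step 2: spectral decomposition; eigenvalues come in pairs +/- i*b.
vals, vecs = np.linalg.eig(B)
idx = [k for k in range(4) if vals[k].imag > 1e-9]  # one per conjugate pair

# Step 3: each eigenvector x + i*y spans an invariant plane on which
# B x = -lam*y and B y = lam*x; combine each conjugate pair into a
# real rank-2 skew matrix (a simple bivector's matrix).
parts = []
for k in idx:
    lam = vals[k].imag
    v = vecs[:, k]
    x, y = v.real, v.imag
    x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)
    parts.append(lam * (np.outer(x, y) - np.outer(y, x)))

B1, B2 = parts
assert np.allclose(B, B1 + B2)        # the two pieces recover B
assert np.allclose(B1 @ B2, B2 @ B1)  # and they commute
```

The eigenvectors of the (normal) skew matrix for distinct eigenvalues are orthogonal, which is why the two planes, and hence the two pieces, are orthogonal and commute.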

$ \newcommand\form[1]{\langle#1\rangle} $It is much, much easier than the other answers make it out to be. Note that $B^2$ must be an even element, so $$ B^2 = \form{B^2}_0 + \form{B^2}_2 + \form{B^2}_4 = B\cdot B + B\times B + B\wedge B = B\cdot B + B\wedge B, $$ where $\form{\cdot}_r$ is the grade-$r$ projection and $X\times Y = \tfrac12(XY - YX)$ is the commutator product, which vanishes when $X = Y = B$. (It is true for any $k$-vector $X$ that $\form{BX}_k = B\times X$.)

If $B$ is simple, then clearly we're left with $B^2 = B\cdot B$, a scalar.

If $B^2$ is scalar, then $B\wedge B = 0$. We may write $B$ as a sum of two simple bivectors $$ B = B_1 + B_2. $$ Hence $$ 0 = B\wedge B = 2B_1\wedge B_2, $$ so the planes represented by $B_1$ and $B_2$ must intersect in a line spanned by some vector $b$ (if the two planes met only at the origin, $B_1\wedge B_2$ would be a nonzero multiple of the pseudoscalar); so $B$ lives in a 3D subspace and $B$ is factorable. More concretely, there are $b_1, b_2$ such that $B_1 = b\wedge b_1$ and $B_2 = b\wedge b_2$, so $$ B = b\wedge b_1 + b\wedge b_2 = b\wedge(b_1 + b_2). $$
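The final factorization step is just bilinearity of the wedge product; a small numerical check, with a hypothetical `wedge` helper in plain numpy:

```python
import numpy as np

rng = np.random.default_rng(3)
b, b1, b2 = rng.standard_normal((3, 4))

def wedge(u, v):
    """The 6 components (u ^ v)_ij for i < j, as a flat array."""
    return np.array([u[i] * v[j] - u[j] * v[i]
                     for i in range(4) for j in range(i + 1, 4)])

# b ^ b1 + b ^ b2 = b ^ (b1 + b2), component by component.
assert np.allclose(wedge(b, b1) + wedge(b, b2), wedge(b, b1 + b2))
```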