Properties of different matrix representations of Clifford algebras


I am looking for some theorems about matrix representations of Clifford algebras.
Let $a \in G_{p,q,r}$, where $p$ basis vectors square to $1$, $q$ square to $-1$, and $r$ square to $0$; that is, $a=\sum_{i=1}^{2^{p+q+r}}a^{(i)}g_i$, where $g_i$ denotes the $i$-th basis element of the Clifford algebra $G_{p,q,r}$ arising from an orthonormal basis. Then we can represent $a$ as a matrix of real numbers in at least two different ways, as follows:

Let the operator $V(\cdot)$ denote the vector form of a Clifford number: if $x=\sum_{i=1}^{2^{p+q+r}}x^{(i)} g_i$, then $V(x)=[x^{(1)},x^{(2)},x^{(3)},\dots,x^{(2^{p+q+r})}]^{T}$. Take the geometric product of two Clifford numbers $a,b \in G_{p,q,r}$ and calculate the matrix of real numbers $C$ such that $CV(b)=V(ab)$. The matrix $C$ is the first representation. The second representation is obtained by calculating the matrix $D$ such that $DV(b)=V(ba)$. In other words, $C$ and $D$ are the matrices of the linear transformations induced by left and right multiplication by $a$, respectively.
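To make the construction concrete, here is a small NumPy sketch (my own code, not part of the question; the bitmask encoding of blades and the basis ordering $1, e_1, e_2, e_{12}$ are assumptions). `left_matrix` and `right_matrix` build $C$ and $D$ from the coefficient vector $V(a)$:

```python
import numpy as np

# Signature of G_{1,1,0}: squares of the generators e1, e2 (assumed ordering).
SIG = [1, -1]

def blade_product(a, b, sig):
    """Geometric product of basis blades given as bitmasks:
    g_a * g_b = sign * g_(a XOR b), with sign in {-1, 0, +1}."""
    sign = 1
    for i in range(len(sig)):
        if (b >> i & 1) and bin(a >> (i + 1)).count("1") % 2:
            sign = -sign              # reordering sign: e_i crosses higher generators of a
        if (a >> i & 1) and (b >> i & 1):
            sign *= sig[i]            # repeated generator contracts to its square
    return sign, a ^ b

def left_matrix(a, sig=SIG):
    """C with C @ V(b) = V(a b) for every b."""
    n = 1 << len(sig)
    C = np.zeros((n, n))
    for m in range(n):                # component a^(m) of a
        for j in range(n):            # basis blade g_j of b
            s, k = blade_product(m, j, sig)
            C[k, j] += s * a[m]
    return C

def right_matrix(a, sig=SIG):
    """D with D @ V(b) = V(b a) for every b."""
    n = 1 << len(sig)
    D = np.zeros((n, n))
    for m in range(n):
        for j in range(n):
            s, k = blade_product(j, m, sig)
            D[k, j] += s * a[m]
    return D
```

For $G_{1,1,0}$ this reproduces the explicit $C$ and $D$ matrices written out further down.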

For the sake of generalizing the observations below to algebras where $r$ is nonzero, from now on we assume that if $g_i^{2}=0$ then the associated coefficient $a^{(i)}$ is zero. I am looking for proofs of the following observations about these representations:

(1) $\det(C)=\det(D)$, and the eigenvalues of $C$ equal the eigenvalues of $D$.
(2) Each distinct nonzero element of the matrices $C$ and $D$ appears either always symmetrically, always antisymmetrically, or always with zero symmetry. Moreover, $a^{(i)}$ appears symmetrically if and only if $g_i^{2}=1$, antisymmetrically iff $g_i^{2}=-1$, and with zero symmetry iff $g_i^{2}=0$.
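Short of a proof, observation (1) can at least be stress-tested numerically. The sketch below (my own code; the bitmask blade product and the choice of test signatures are assumptions) compares the characteristic polynomials of $C$ and $D$, which encode both the determinant and the eigenvalues, for random multivectors, zeroing the coefficients of degenerate blades as assumed above:

```python
import numpy as np

def blade_product(a, b, sig):
    """g_a * g_b = sign * g_(a XOR b) for basis blades given as bitmasks."""
    sign = 1
    for i in range(len(sig)):
        if (b >> i & 1) and bin(a >> (i + 1)).count("1") % 2:
            sign = -sign                  # reordering sign
        if (a >> i & 1) and (b >> i & 1):
            sign *= sig[i]                # contraction of repeated generator
    return sign, a ^ b

def reps(a, sig):
    """Left and right multiplication matrices C, D of the multivector a."""
    n = 1 << len(sig)
    C, D = np.zeros((n, n)), np.zeros((n, n))
    for m in range(n):
        for j in range(n):
            s, k = blade_product(m, j, sig)
            C[k, j] += s * a[m]
            s, k = blade_product(j, m, sig)
            D[k, j] += s * a[m]
    return C, D

rng = np.random.default_rng(0)
checks = []
for sig in ([1, -1], [1, 1], [-1, -1], [1, -1, 0]):
    n = 1 << len(sig)
    a = rng.standard_normal(n)
    for m in range(n):                    # enforce a^(i) = 0 when g_i^2 = 0
        if blade_product(m, m, sig)[0] == 0:
            a[m] = 0.0
    C, D = reps(a, sig)
    # same characteristic polynomial => same determinant and same eigenvalues
    checks.append(np.allclose(np.poly(C), np.poly(D)))
```

Of course a passing random test is only evidence, not a proof.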

I should explain what I mean by (2). For example, take the algebra $G_{1,1,0}$. Then we have the following matrices $C$ and $D$: \begin{align} C=\left[ \begin{matrix} a^{(1)}& a^{(2)}& -a^{(3)}& a^{(4)}\\ a^{(2)}& a^{(1)}& -a^{(4)}& a^{(3)}\\ a^{(3)}& -a^{(4)}& a^{(1)}& a^{(2)}\\ a^{(4)}& -a^{(3)}& a^{(2)}& a^{(1)} \end{matrix}\right] \end{align}

\begin{align} D=\left[ \begin{matrix} a^{(1)}& a^{(2)}& -a^{(3)}& a^{(4)}\\ a^{(2)}& a^{(1)}& a^{(4)}& -a^{(3)}\\ a^{(3)}& a^{(4)}& a^{(1)}& -a^{(2)}\\ a^{(4)}& a^{(3)}& -a^{(2)}& a^{(1)} \end{matrix}\right] \end{align}

It can be seen that:
if $C_{ij}=\pm a^{(1)}$, then $C_{ji}=\pm C_{ij}$ $\qquad$ ($a^{(1)}$ is symmetric, $e_0^2=1$)
if $C_{ij}=\pm a^{(2)}$, then $C_{ji}=\pm C_{ij}$ $\qquad$ ($a^{(2)}$ is symmetric, $e_1^2=1$)
if $C_{ij}=\pm a^{(3)}$, then $C_{ji}=\mp C_{ij}$ $\qquad$ ($a^{(3)}$ is antisymmetric, $e_2^2=-1$)
if $C_{ij}=\pm a^{(4)}$, then $C_{ji}=\pm C_{ij}$ $\qquad$ ($a^{(4)}$ is symmetric, $e_{12}^2=1$)

The same is true for $D$ also.
Here is another example, this time for $G_{0,1,1}$: \begin{align} C=\left[ \begin{matrix} a^{(1)}& -a^{(2)}& 0& 0\\ a^{(2)}& a^{(1)}& 0& 0\\ a^{(3)}& a^{(4)}& a^{(1)}& -a^{(2)}\\ a^{(4)}& -a^{(3)}& a^{(2)}& a^{(1)}\\ \end{matrix}\right] \end{align} \begin{align} D=\left[ \begin{matrix} a^{(1)}& -a^{(2)}& 0& 0\\ a^{(2)}& a^{(1)}& 0& 0\\ a^{(3)}& -a^{(4)}& a^{(1)}& a^{(2)}\\ a^{(4)}& a^{(3)}& -a^{(2)}& a^{(1)}\\ \end{matrix}\right] \end{align}

if $C_{ij}=\pm a^{(1)}$, then $C_{ji}=\pm C_{ij}$ $\qquad$ ($a^{(1)}$ is symmetric, $e_0^2=1$)
if $C_{ij}=\pm a^{(2)}$, then $C_{ji}=\mp C_{ij}$ $\qquad$ ($a^{(2)}$ is antisymmetric, $e_1^2=-1$)
if $C_{ij}=\pm a^{(3)}$, then $C_{ji}=0$ $\qquad$ ($a^{(3)}$ is zero-symmetric, $e_2^2=0$)
if $C_{ij}=\pm a^{(4)}$, then $C_{ji}=0$ $\qquad$ ($a^{(4)}$ is zero-symmetric, $e_{12}^2=0$)

The above observations (1) and (2) are just conjectures formed after inspecting many of these $C$ and $D$ matrices for different algebras, so they may be false. Ideally I would like to prove or disprove each of them.

Edit: I deleted the observations I now have proofs of, namely that $CD=DC$ and $C^{T}D=DC^{T}$. These can be proved using the associativity of the Clifford algebra, though my proof of $C^{T}D=DC^{T}$ still relies on parts of observation (2) being true.
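Both identities are also easy to confirm numerically. A self-contained sketch (my own bitmask implementation of the blade product; the few small nondegenerate signatures are chosen as examples):

```python
import numpy as np

def blade_product(a, b, sig):
    """g_a * g_b = sign * g_(a XOR b) for basis blades given as bitmasks."""
    sign = 1
    for i in range(len(sig)):
        if (b >> i & 1) and bin(a >> (i + 1)).count("1") % 2:
            sign = -sign                  # reordering sign
        if (a >> i & 1) and (b >> i & 1):
            sign *= sig[i]                # contraction of repeated generator
    return sign, a ^ b

def reps(a, sig):
    """Left and right multiplication matrices C, D of the multivector a."""
    n = 1 << len(sig)
    C, D = np.zeros((n, n)), np.zeros((n, n))
    for m in range(n):
        for j in range(n):
            s, k = blade_product(m, j, sig)
            C[k, j] += s * a[m]
            s, k = blade_product(j, m, sig)
            D[k, j] += s * a[m]
    return C, D

rng = np.random.default_rng(1)
ok = []
for sig in ([1, -1], [1, 1], [-1, -1]):
    a = rng.standard_normal(1 << len(sig))
    C, D = reps(a, sig)
    ok.append(np.allclose(C @ D, D @ C))      # CD = DC (from associativity)
    ok.append(np.allclose(C.T @ D, D @ C.T))  # C^T D = D C^T
```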

Best answer:

Let's take the example of $G_{1,1,0}$, as you did, and let $e_1, e_2$ be an orthonormal basis for $V$, with $e_1$ squaring to $1$ and $e_2$ squaring to $-1$.

Let's look at $C$ and $D$ corresponding to $e_1$:

$$C=\begin{bmatrix}0&1&0&0\\1&0&0&0\\0&0&0&1\\0&0&1&0\\\end{bmatrix} D=\begin{bmatrix}0&1&0&0\\1&0&0&0\\0&0&0&-1\\0&0&-1&0\\\end{bmatrix}$$

Similarly for $e_2$:

$$C=\begin{bmatrix}0&0&-1&0\\0&0&0&1\\1&0&0&0\\0&-1&0&0\\\end{bmatrix}D=\begin{bmatrix}0&0&1&0\\0&0&0&1\\-1&0&0&0\\0&-1&0&0\\\end{bmatrix}$$

and for $e_1e_2$:

$$C=\begin{bmatrix}0&0&0&1\\0&0&-1&0\\0&-1&0&0\\1&0&0&0\\\end{bmatrix}D=\begin{bmatrix}0&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0\\\end{bmatrix}$$

And of course for the identity, both matrices are the identity matrices.

In retrospect, it's easy to see why: since the product of two basis elements is $\pm$ another basis element (or zero), each of these matrices is a signed permutation matrix, except that degenerate generators produce zero rows and columns where the products vanish. Moreover, the matrices for distinct basis elements occupy disjoint positions; they "don't overlap". That explains the symmetry/antisymmetry properties you were observing.
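This structure can be checked mechanically. A quick sketch for $G_{1,1,0}$ (my own code; the bitmask blade product and the basis ordering $1, e_1, e_2, e_{12}$ are assumptions): each basis element's $C$ matrix should be a signed permutation matrix, and together the four matrices should occupy disjoint positions.

```python
import numpy as np

SIG = [1, -1]                      # e1^2 = 1, e2^2 = -1

def blade_product(a, b, sig=SIG):
    """g_a * g_b = sign * g_(a XOR b) for basis blades given as bitmasks."""
    sign = 1
    for i in range(len(sig)):
        if (b >> i & 1) and bin(a >> (i + 1)).count("1") % 2:
            sign = -sign                  # reordering sign
        if (a >> i & 1) and (b >> i & 1):
            sign *= sig[i]                # contraction of repeated generator
    return sign, a ^ b

n = 1 << len(SIG)
mats = []
for m in range(n):                 # C matrix of the single basis blade g_m
    C = np.zeros((n, n))
    for j in range(n):
        s, k = blade_product(m, j)
        C[k, j] = s
    mats.append(C)

# signed permutation: exactly one entry of absolute value 1 per row and column
perm = all(np.allclose(np.abs(C).sum(axis=0), 1) and
           np.allclose(np.abs(C).sum(axis=1), 1) for C in mats)
# disjoint supports: together the four matrices fill every position exactly once
disjoint = np.allclose(sum(np.abs(C) for C in mats), np.ones((n, n)))
```

The matrix for the blade $e_1$ produced this way matches the one displayed above.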

Since every other matrix you are interested in is a linear combination of these, I think this, roughly speaking, explains the symmetries you are seeing.