What is the Grassmann algebra of $2\times 2$ complex matrices?


The traceless Hermitian $2\times 2$ complex matrices form a real Euclidean space $\mathfrak E_3$ with dot product $a\cdot b:=\frac{1}{2}(ab+ba)/I$, where $I$ is the $2\times 2$ identity matrix (the anticommutator $\frac{1}{2}(ab+ba)$ is always a real multiple of $I$, and dividing by $I$ extracts that scalar). A wedge product can also be defined on $\mathfrak E_3$ by $a\wedge b:=\frac{1}{2}(ab-ba)$. The result of the wedge product is a traceless anti-Hermitian $2\times 2$ complex matrix, i.e. $a\wedge b\in su(2)$. This suggests that $M_2(\mathbb C) =\mathbb RI\oplus\mathfrak E_3\oplus su(2)\oplus i\mathbb RI$ carries the structure of a Grassmann algebra. But what is the wedge product of an element of $\mathfrak E_3$ with an element of $su(2)$?
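These claims are easy to sanity-check numerically. Below is a quick NumPy sketch (the variable names and the random test matrices are mine, not part of the question): it builds two random elements of $\mathfrak E_3$ from the Pauli basis and verifies that their anticommutator is a real multiple of $I$ while their wedge product is traceless anti-Hermitian.

```python
import numpy as np

# Pauli matrices: a basis of the traceless Hermitian 2x2 matrices E_3
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

rng = np.random.default_rng(0)
a = sum(c * s for c, s in zip(rng.normal(size=3), (s1, s2, s3)))
b = sum(c * s for c, s in zip(rng.normal(size=3), (s1, s2, s3)))

dot = (a @ b + b @ a) / 2     # the anticommutator, before dividing by I
wedge = (a @ b - b @ a) / 2   # the wedge product, claimed to land in su(2)

# the anticommutator is a real multiple of the identity
assert np.allclose(dot, dot[0, 0] * I) and abs(dot[0, 0].imag) < 1e-12
# the wedge product is traceless and anti-Hermitian
assert abs(np.trace(wedge)) < 1e-12
assert np.allclose(wedge.conj().T, -wedge)
```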

Edit

I suspect that if $a\in\mathfrak E_3$ and $b\in su(2)$ then $$a\wedge b=\frac{1}{2}(ab+ba)\tag 1$$

I base this conjecture on the fact that $su(2)=i\mathfrak E_3$ and on the conjecture that the Hodge star operator on $M_2(\mathbb C)$ regarded as Grassmann algebra is $$ \star x=\begin{cases} \phantom{-}ix & \text{ if } x\in \mathbb R\{I_2\}\cup\mathfrak E_3\\ x/i & \text{ if } x\in su(2)\cup i\mathbb R\{I_2\}\end{cases}\tag 2$$

for if this were true, then according to the known Hodge star identity

$$\alpha \wedge ({\star} \beta) = (\alpha\cdot\beta)\,\omega \text{ for all } k\text{-vectors } \alpha,\beta\in {\textstyle\bigwedge}^{\!k}V,\tag 3$$ taking $\alpha=a$, $\beta=\star b$ and $\omega=iI$, and using that $(2)$ makes $\star$ involutive (so $\star(\star b)=b$), we would get $$a\wedge b=a\wedge \star(\star b)=(a\cdot \star b)iI=(a\cdot(b/i))iI=\frac{i}{2}\left(a(b/i)+(b/i)a\right)=\frac{1}{2}(ab+ba)$$

Equation (2) seems plausible to me because for any $x,y\in\mathfrak E_3$, $\frac{1}{i}x\wedge y=\frac{1}{2i}(xy-yx)$ is the cross product of $x$ and $y$, and we know that in $3$-dimensional Euclidean space $x\times y=\star(x\wedge y)$ and the Hodge star in $3$ dimensions is involutive.
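The cross-product identity invoked here can be checked numerically; the following sketch (my own verification, using the Pauli basis) confirms that $\frac{1}{2i}(xy-yx)$ has exactly the coefficients of $x\times y$:

```python
import numpy as np

# Pauli basis of E_3
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(2)
xv, yv = rng.normal(size=3), rng.normal(size=3)
x = sum(c * m for c, m in zip(xv, s))
y = sum(c * m for c, m in zip(yv, s))

# the matrix whose coefficient vector is the cross product xv x yv
cross = sum(c * m for c, m in zip(np.cross(xv, yv), s))
assert np.allclose((x @ y - y @ x) / (2j), cross)
```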

Answer 1

It may be useful to note that if $V$ is a two-dimensional Hermitian vector space with Hermitian inner product $\langle-,-\rangle$, say conjugate linear in the first factor and linear in the second, then, setting $\mathcal E(V) = \mathrm{Hom}(V,V)$, the map $\alpha\mapsto \alpha^*$ defined by $$ \langle \alpha(v),w\rangle = \langle v,\alpha^*(w)\rangle, \quad \forall v,w \in V, $$ equips $\mathcal E(V)$ with a complex conjugation, and $\langle \alpha,\beta\rangle := \mathrm{tr}(\alpha^*\circ \beta)$ gives $\mathcal E(V)$ the structure of a Hermitian inner product space.

As an $\mathbb R$-vector space, $\mathcal E$ decomposes into $\mathfrak u(V)\oplus \mathfrak a(V)$, where $\mathfrak u(V) = \{X \in \mathcal E(V): X=-X^*\}$ and $\mathfrak a(V) = \{X \in \mathcal E(V): X=X^*\}$. Moreover, $\langle \alpha, \beta \rangle_{\mathbb R}:= \mathrm{Re}(\langle\alpha,\beta\rangle)$ is a real inner product on the $8$-dimensional $\mathbb R$-vector space $\mathcal E(V)$, which, when restricted to $\mathfrak a(V)$, the Hermitian endomorphisms of $V$, coincides with the one in the OP's post.
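This setup admits a small numerical check, assuming $V=\mathbb C^2$ with the standard inner product (so $\alpha^*$ is the conjugate transpose); the variable names below are mine:

```python
import numpy as np

rng = np.random.default_rng(3)

def cmat(n):
    # random complex n x n matrix
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def herm(u, z):
    # standard Hermitian inner product, conjugate linear in the first slot
    return np.vdot(u, z)

alpha, beta = cmat(2), cmat(2)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
w = rng.normal(size=2) + 1j * rng.normal(size=2)

adj = alpha.conj().T  # alpha^* is the conjugate transpose
# defining property of the adjoint: <alpha(v), w> = <v, alpha^*(w)>
assert np.isclose(herm(alpha @ v, w), herm(v, adj @ w))

# <alpha, beta> := tr(alpha^* beta) is Hermitian: <beta, alpha> = conj(<alpha, beta>)
ip = np.trace(alpha.conj().T @ beta)
assert np.isclose(np.trace(beta.conj().T @ alpha), np.conj(ip))
```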

Now let $\mathfrak{gl}(V)$ be the Lie algebra associated to the associative algebra $\mathcal E(V)$, that is, $\mathcal E(V)$ equipped with the commutator bracket. As an $\mathbb R$-Lie algebra, $\mathfrak{gl}(V)$ acquires an automorphism $X\mapsto X^{\circ}$ where $X^{\circ} = -X^{*}$, and the decomposition $\mathfrak{gl}(V) = \mathfrak u(V)\oplus \mathfrak a(V)$ is the decomposition of $\mathfrak{gl}(V)$ into the eigenspaces of this automorphism, with $\mathfrak u(V)$ the fixed subalgebra and $\mathfrak a(V)$ its $-1$-eigenspace, so that $\mathfrak a(V)$ is, in particular, a $\mathfrak u(V)$-subrepresentation of $\mathfrak{gl}(V)$. Using $\mathrm{tr}$ we may refine this into a slightly finer decomposition: $$ \mathfrak{u}(V) = i\mathbb R.I\oplus \mathfrak{su}(V), \quad \mathfrak a(V) = \mathbb R.I \oplus \mathfrak{sa}(V), $$ where $I$ denotes the identity linear map and $\mathfrak{su}(V) = \{X \in \mathfrak{u}(V): \mathrm{tr}(X)=0\}$, $\mathfrak{sa}(V)=\{X \in \mathfrak{a}(V): \mathrm{tr}(X)=0\}$. These decompositions are in fact orthogonal direct sum decompositions into $\mathfrak{su}(V)$-subrepresentations (where "orthogonal" means with respect to $\langle -,-\rangle_{\mathbb R}$). Moreover, multiplication by $i$ gives an $\mathfrak{su}_2$-isomorphism from $\mathbb R.I$ to $i\mathbb R.I$ and from $\mathfrak{su}(V)$ to $\mathfrak{sa}(V)$.

Thus I think the "Grassmann product" that the OP is looking for on $\mathcal E(V)$ is just $a\wedge b = \frac{1}{2}[a,b]$, $\forall a,b \in \mathcal E(V)$ where $[a,b]=ab-ba$ is the commutator bracket on the endomorphisms of $V$, i.e. the standard Lie bracket associated to $\mathfrak{gl}(V)$. In particular, if $\alpha \in \mathfrak{su}(V)$ and $\beta \in \mathfrak{sa}(V)$, then $(\alpha\beta)^* = \beta^*\alpha^* = -\beta\alpha$, so that $\alpha\wedge\beta = \frac{1}{2}(\alpha\beta +(\alpha\beta)^*)= \pi_{\mathfrak a}(\alpha\beta)$, where $\pi_{\mathfrak a}$ denotes the orthogonal projection from $\mathfrak{gl}(V)$ to $\mathfrak{a}(V)$ (with respect to the real inner product $\langle -,-\rangle_{\mathbb R}$).
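The last identity is easy to verify numerically: for random $\alpha\in\mathfrak{su}(2)$ and $\beta\in\mathfrak{sa}(2)$ built from the Pauli basis, $\frac{1}{2}[\alpha,\beta]$ coincides with the Hermitian part of $\alpha\beta$ (a sketch of mine, not the answerer's code):

```python
import numpy as np

# Pauli basis of the traceless Hermitian matrices sa(V)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(4)
alpha = 1j * sum(c * m for c, m in zip(rng.normal(size=3), s))  # in su(V)
beta = sum(c * m for c, m in zip(rng.normal(size=3), s))        # in sa(V)

wedge = (alpha @ beta - beta @ alpha) / 2   # (1/2)[alpha, beta]
ab = alpha @ beta
# (1/2)[alpha, beta] equals the Hermitian part of alpha beta ...
assert np.allclose(wedge, (ab + ab.conj().T) / 2)
# ... and in particular lands in a(V) (it is Hermitian)
assert np.allclose(wedge.conj().T, wedge)
```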

Answer 2

If $a\in\mathfrak E_3$ and $b\in su(2)$ then $$a\wedge b=\frac{1}{2}(ab+ba)\tag 1$$

Proof.

Since $M_2(\mathbb C)$ is a Clifford algebra with the matrix multiplication as Clifford product, the wedge product of the Pauli matrices $\sigma_1,\sigma_2,\sigma_3\in\mathfrak E_3$ is by definition $$\sigma_1\wedge \sigma_2\wedge \sigma_3=\frac{1}{6}(\sigma_1\sigma_2\sigma_3+\sigma_2\sigma_3\sigma_1+\sigma_3\sigma_1\sigma_2-\sigma_2\sigma_1\sigma_3-\sigma_3\sigma_2\sigma_1-\sigma_1\sigma_3\sigma_2)=iI$$ since $$\sigma_1\sigma_2=-\sigma_2\sigma_1=i\sigma_3,\\ \sigma_2\sigma_3=-\sigma_3\sigma_2=i\sigma_1,\\ \sigma_3\sigma_1=-\sigma_1\sigma_3=i\sigma_2\tag 2$$

and $$\sigma_1^2=\sigma_2^2=\sigma_3^2=I.\tag 3$$
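Identities $(2)$, $(3)$ and the value of the triple wedge product can be confirmed numerically; in the sketch below (my own check) the sign of each permutation is obtained as the determinant of the corresponding permutation matrix:

```python
import numpy as np
from itertools import permutations

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

# identities (2) and (3)
assert np.allclose(s1 @ s2, 1j * s3) and np.allclose(s2 @ s1, -1j * s3)
assert np.allclose(s2 @ s3, 1j * s1) and np.allclose(s3 @ s2, -1j * s1)
assert np.allclose(s3 @ s1, 1j * s2) and np.allclose(s1 @ s3, -1j * s2)
assert all(np.allclose(m @ m, I) for m in (s1, s2, s3))

# antisymmetrized triple product: sign of p = det of its permutation matrix
mats = (s1, s2, s3)
sig = {p: np.linalg.det(np.eye(3)[list(p)]) for p in permutations(range(3))}
triple = sum(sig[p] * mats[p[0]] @ mats[p[1]] @ mats[p[2]]
             for p in permutations(range(3))) / 6
assert np.allclose(triple, 1j * I)
```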

By the alternating property of the wedge product,

$$\sigma_j\wedge\sigma_k\wedge\sigma_l=\epsilon_{j,k,l}\ iI\tag 4$$ where $\epsilon_{j,k,l}$ is the Levi-Civita symbol.

Now, let $$a=\sum_j a_j\sigma_j \in\mathfrak E_3\tag 5$$ and $$b=\sum_{k,l} b_{k,l}\sigma_k\wedge\sigma_l = \sum_{k,l}\frac{1}{2}b_{k,l}(\sigma_k\sigma_l-\sigma_l\sigma_k)=\sum_j ib_j\sigma_j\in su(2)\tag 6$$ where $$b_1:=b_{2,3}-b_{3,2},\\ b_2:=b_{3,1}-b_{1,3},\\ b_3:=b_{1,2}-b_{2,1},\tag 7$$ that is,

$$b_j=\sum_{k,l}\epsilon_{j,k,l}b_{k,l}\tag 8$$
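The change of coefficients $(6)$–$(8)$ can be verified numerically as well; this sketch (variable names mine, with indices running over $0,1,2$ instead of $1,2,3$) checks that $\sum_{k,l}b_{k,l}\,\sigma_k\wedge\sigma_l$ and $i\sum_j b_j\sigma_j$ agree for a random coefficient matrix:

```python
import numpy as np
from itertools import product

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def eps(j, k, l):
    # Levi-Civita symbol for indices in {0, 1, 2}
    return int((j - k) * (k - l) * (l - j) / 2)

def wedge(x, y):
    return (x @ y - y @ x) / 2

rng = np.random.default_rng(5)
B = rng.normal(size=(3, 3))  # coefficients b_{k,l}

b_from_wedges = sum(B[k, l] * wedge(s[k], s[l])
                    for k, l in product(range(3), repeat=2))
bj = [sum(eps(j, k, l) * B[k, l] for k, l in product(range(3), repeat=2))
      for j in range(3)]
b_from_coeffs = 1j * sum(c * m for c, m in zip(bj, s))

assert np.allclose(b_from_wedges, b_from_coeffs)
```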

Then, the LHS of $(1)$ is $$a\wedge b=\sum_{j,k,l}a_jb_{k,l}\sigma_j\wedge\sigma_k\wedge\sigma_l \\= \sum_{j,k,l}a_jb_{k,l}\epsilon_{j,k,l}\ iI \\= \left(\sum_j a_j\sum_{k,l}\epsilon_{j,k,l}b_{k,l}\right)iI \\= \left(\sum_j a_jb_j\right)iI,\tag 9 $$

while the RHS of $(1)$ is $$\frac{1}{2}(ab+ba)\\ =\frac{1}{2}\left(\sum_j a_j\sigma_j\right)\left(\sum_k ib_k\sigma_k\right)+\frac{1}{2}\left(\sum_j ib_j\sigma_j\right)\left(\sum_k a_k\sigma_k\right)\\ =\frac{i}{2}\sum_{j,k} a_jb_k(\sigma_j\sigma_k+\sigma_k\sigma_j)\\ =i\sum_{j} a_jb_j(\sigma_j)^2\\ =\left(\sum_{j} a_jb_j\right)iI\tag {10}$$ q.e.d.
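Finally, the identity $(1)$ itself can be tested numerically; the sketch below (mine, with 0-based indices) checks that $\frac{1}{2}(ab+ba)=\bigl(\sum_j a_jb_j\bigr)iI$ for random $a\in\mathfrak E_3$ and $b\in su(2)$:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I = np.eye(2)

rng = np.random.default_rng(6)
av, bv = rng.normal(size=3), rng.normal(size=3)
a = sum(c * m for c, m in zip(av, s))        # a in E_3
b = 1j * sum(c * m for c, m in zip(bv, s))   # b in su(2)

lhs = (av @ bv) * 1j * I      # (sum_j a_j b_j) iI, as in (9)
rhs = (a @ b + b @ a) / 2     # the conjectured wedge product, as in (10)
assert np.allclose(lhs, rhs)
```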