Showing the Lie Algebras $\mathfrak{su}(2)$ and $\mathfrak{sl}(2,\mathbb{R})$ are not isomorphic.


I am working through the exercises in Hall's "Lie Groups, Lie Algebras, and Representations" and can't complete exercise 11 of chapter 3. My aim was to demonstrate that there does not exist a vector space isomorphism $A$ between the two algebras that also preserves the commutator:
$$[AX, AY] = A[X, Y]$$ To this end I computed the following commutation relations on bases for the two spaces.

For the $\mathfrak{su}(2)$ basis matrices $e_1, e_2, e_3$ it holds that $$[e_1, e_2] = 2e_3 \,\,\,\,\,\, [e_1, e_3] = -2e_2 \,\,\,\,\,\, [e_2, e_3] = 2e_1$$

For the $\mathfrak{sl}(2, \mathbb{R})$ basis matrices $f_1, f_2, f_3$ it holds that $$[f_1, f_2] = 2f_2 \,\,\,\,\,\, [f_1, f_3] = -2f_3 \,\,\,\,\,\, [f_2, f_3] = f_1$$
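(These relations are easy to verify by machine. Below is a minimal sympy sketch; the explicit matrices are one standard choice of bases realizing the brackets above, and are my assumption rather than something fixed by the exercise.)

```python
import sympy as sp

# Assumed standard bases: any bases satisfying the stated brackets work.
e1 = sp.Matrix([[sp.I, 0], [0, -sp.I]])
e2 = sp.Matrix([[0, 1], [-1, 0]])
e3 = sp.Matrix([[0, sp.I], [sp.I, 0]])

f1 = sp.Matrix([[1, 0], [0, -1]])
f2 = sp.Matrix([[0, 1], [0, 0]])
f3 = sp.Matrix([[0, 0], [1, 0]])

br = lambda X, Y: X*Y - Y*X  # matrix commutator

assert br(e1, e2) == 2*e3 and br(e1, e3) == -2*e2 and br(e2, e3) == 2*e1
assert br(f1, f2) == 2*f2 and br(f1, f3) == -2*f3 and br(f2, f3) == f1
```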

It is clear that the linear bijection $(e_1, e_2, e_3) \mapsto (f_1, f_2, f_3)$ would not preserve these relations, nor would any permutation of the target matrices. However, I need to show that no invertible matrix satisfies $$[AX, AY] = A[X, Y]$$ So from there I began to derive equations for the entries of $A$. They are ugly expressions in terms of the sub-determinants of $A$, and from them I can't see a way to conclude that $A$ cannot exist. Is there an easier way to finish the proof than deriving the equations for $A$?

Note: I have looked up solutions for this problem and the only technique I see hinted at is to consider Killing forms (which have not yet been covered in this book).

6 Answers

BEST ANSWER

Your approach works without problems if you write the condition $[Ax,Ay]=A[x,y]$ for all $x,y$ in terms of the $9$ coefficients of the matrix $A$. The resulting polynomial equations in these $9$ unknowns over $\mathbb{R}$ quickly yield $\det(A)=0$, a contradiction.

Another elementary argument is the following. $\mathfrak{sl}(2,\mathbb{R})$ has a $2$-dimensional subalgebra, e.g., $\mathfrak{a}=\langle f_1,f_2\rangle$, but $\mathfrak{su}(2)$ has no $2$-dimensional subalgebra. Hence they cannot be isomorphic.

ANSWER

This is a Q&A-style answer, not meant to be the final word on the question. It fleshes out one of the techniques suggested by Dietrich Burde for future readers.

Another elementary argument is the following. $\mathfrak{sl}(2,\mathbb{R})$ has a $2$-dimensional subalgebra, e.g., $\mathfrak{a}=\langle f_1,f_2\rangle$, but $\mathfrak{su}(2)$ has no $2$-dimensional subalgebra. Hence they cannot be isomorphic.


$\mathfrak{sl}(2, \mathbb{R})$ has a two-dimensional subalgebra.

Consider matrices of the form $\alpha_1 f_1 + \alpha_2 f_2$. Clearly these form a subspace of $\mathfrak{sl}(2, \mathbb{R})$. We need to show that this subspace is closed under the bracket: $$[\alpha_1 f_1 + \alpha_2 f_2, \beta_1 f_1 + \beta_2 f_2] = 2(\alpha_1\beta_2 - \alpha_2\beta_1)f_2$$ The result lies back in the span of $f_1, f_2$, so this subspace is a subalgebra.
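(A quick symbolic confirmation of this identity, assuming the standard matrix realization of $f_1, f_2$; a sketch, not part of the original argument:)

```python
import sympy as sp

a1, a2, b1, b2 = sp.symbols('alpha1 alpha2 beta1 beta2')
f1 = sp.Matrix([[1, 0], [0, -1]])  # assumed standard basis elements
f2 = sp.Matrix([[0, 1], [0, 0]])

X = a1*f1 + a2*f2
Y = b1*f1 + b2*f2
# the bracket of two elements of span{f1, f2} lands back in span{f2}
assert sp.expand(X*Y - Y*X - 2*(a1*b2 - a2*b1)*f2) == sp.zeros(2, 2)
```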

$\mathfrak{su}(2)$ does not have a two-dimensional subalgebra.

Consider a two-dimensional subspace with basis $g_1, g_2$. Then $$[\alpha_1 g_1 + \alpha_2 g_2, \beta_1 g_1 + \beta_2 g_2] = (\alpha_1\beta_2 - \alpha_2\beta_1)[g_1, g_2]$$ We must show that $g_1, g_2$ cannot be chosen such that $[g_1, g_2]$ is in the span of $g_1, g_2$. To this end let $g_1 = \sum_i a_i e_i$, $g_2 = \sum_i b_i e_i$. It can be shown through direct calculation that $$[g_1, g_2] = \begin{vmatrix} 2 e_1 & a_1 & b_1 \\ 2 e_2 & a_2 & b_2 \\ 2 e_3 & a_3 & b_3 \end{vmatrix}$$ In other words, in the coordinates given by the basis $e_1, e_2, e_3$, the commutator of $g_1$ and $g_2$ is twice their cross product. Since the cross product of two linearly independent vectors is nonzero and perpendicular to both, $[g_1, g_2]$ cannot lie in the span of $g_1, g_2$, and we are done.
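(The direct calculation can be checked with sympy working purely from the structure constants; the helper function below is my own naming:)

```python
import sympy as sp

a = sp.symbols('a1:4')
b = sp.symbols('b1:4')

def bracket_coords(u, v):
    # coordinates of [sum u_i e_i, sum v_i e_i] in the basis e1, e2, e3,
    # using [e1,e2] = 2e3, [e1,e3] = -2e2, [e2,e3] = 2e1
    return sp.Matrix([
         2*(u[1]*v[2] - u[2]*v[1]),  # e1 coefficient
        -2*(u[0]*v[2] - u[2]*v[0]),  # e2 coefficient
         2*(u[0]*v[1] - u[1]*v[0]),  # e3 coefficient
    ])

cross = sp.Matrix(a).cross(sp.Matrix(b))
assert sp.expand(bracket_coords(a, b) - 2*cross) == sp.zeros(3, 1)
```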

ANSWER

This is a Q&A-style answer, not meant to be the final word on the question. It fleshes out one of the techniques suggested by Mariano Suárez-Alvarez for future readers.

An isomorphism $f:\mathfrak{su}(2) \to \mathfrak{sl}(2,\mathbb{R})$ has to map a diagonalizable element to a diagonalizable element.

What follows isn't quite the same technique, but is inspired by it. Instead I will use the fact that if an isomorphism existed between $\mathfrak{su}(2)$ and $\mathfrak{sl}(2, \mathbb{R})$, then the induced homomorphism on their adjoint representations would have to preserve diagonalizability of matrices. This leads to a contradiction.

The following proposition is inspired by "Lie algebra homomorphisms preserve Jordan form":

Suppose the Lie algebras $\mathfrak{g}, \mathfrak{h}$ are isomorphic. Denote the isomorphism by $\phi : \mathfrak{g} \to \mathfrak{h}$. Then for all diagonalizable $ad_X \in ad_\mathfrak{g}$, $\phi^*(ad_X) \in ad_\mathfrak{h}$ is diagonalizable (where $\phi^*$ is the induced homomorphism between the adjoint representations). In particular, if $\lambda_i$, $Y_i$ is an eigenvalue, eigenvector pair of $ad_X$, then $\lambda_i$, $\phi(Y_i)$ is an eigenvalue, eigenvector pair of $ad_{\phi(X)}$.

Suppose that $ad_X$ is diagonalizable with eigenvalues $\lambda_i$ and eigenvectors $Y_i$. Then $$ad_X(Y_i) = \lambda_i Y_i$$ We want to show that $\phi(Y_i)$ is an eigenvector of $\phi^*(ad_X)$.

\begin{eqnarray*} \phi^*(ad_X)(\phi(Y_i)) &=& ad_{\phi(X)}(\phi(Y_i)) \\ &=& [\phi(X), \phi(Y_i)] \\ &=& \phi([X, Y_i]) \\ &=& \phi(ad_X(Y_i)) \\ &=& \lambda_i\phi(Y_i) \\ \end{eqnarray*}

Now using the commutation relations stated in the problem we can calculate the adjoint representation of $\mathfrak{su}(2)$: $$ ad_{e_1} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -2 \\ 0 & 2 & 0 \end{bmatrix} \,\,\,\,\, ad_{e_2} = \begin{bmatrix} 0 & 0 & 2 \\ 0 & 0 & 0 \\ -2 & 0 & 0 \end{bmatrix}\,\,\,\,\, ad_{e_3} = \begin{bmatrix} 0 & -2 & 0 \\ 2 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

For $\mathfrak{sl}(2, \mathbb{R})$ we find: $$ ad_{f_1} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -2 \end{bmatrix} \,\,\,\,\, ad_{f_2} = \begin{bmatrix} 0 & 0 & 1 \\ -2 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\,\,\,\,\, ad_{f_3} = \begin{bmatrix} 0 & -1 & 0 \\ 0 & 0 & 0 \\ 2 & 0 & 0 \end{bmatrix}$$

Suppose $\phi$ is an isomorphism between $\mathfrak{sl}(2, \mathbb{R})$ and $\mathfrak{su}(2)$ and that $$\phi(f_1) = a_1 e_1 + a_2 e_2 + a_3 e_3$$ Now any real linear combination of the matrices $ad_{e_i}$ is skew-symmetric, which means that it has purely imaginary eigenvalues. On the other hand the matrix $ad_{f_1}$ has eigenvalues $0, -2, 2$. Consider the eigenvalue, eigenvector pair $-2, v$ of $ad_{f_1}$. By the proposition, $\phi(v)$ would have to be an eigenvector of $ad_{\phi(f_1)}$ with eigenvalue $-2$, which is impossible, so we have a contradiction.
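(To make the eigenvalue claim concrete: the characteristic polynomial of a general real combination of the $ad_{e_i}$, with $a_i$ as in $\phi(f_1)$ above, can be computed symbolically. This sympy sketch shows its roots are $0$ and $\pm 2i\sqrt{a_1^2+a_2^2+a_3^2}$, so no nonzero real eigenvalue such as $-2$ can occur:)

```python
import sympy as sp

a1, a2, a3, lam = sp.symbols('a1 a2 a3 lambda', real=True)

ad_e1 = sp.Matrix([[0, 0, 0], [0, 0, -2], [0, 2, 0]])
ad_e2 = sp.Matrix([[0, 0, 2], [0, 0, 0], [-2, 0, 0]])
ad_e3 = sp.Matrix([[0, -2, 0], [2, 0, 0], [0, 0, 0]])
ad_f1 = sp.Matrix([[0, 0, 0], [0, 2, 0], [0, 0, -2]])

adX = a1*ad_e1 + a2*ad_e2 + a3*ad_e3
assert adX.T == -adX  # skew-symmetric, hence purely imaginary spectrum

print(sp.factor(adX.charpoly(lam).as_expr()))
# lambda*(lambda**2 + 4*a1**2 + 4*a2**2 + 4*a3**2)
print(ad_f1.eigenvals())  # {0: 1, 2: 1, -2: 1}: real nonzero eigenvalues
```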

ANSWER

(Warning: See the comments below this post - this answer is currently incorrect. A completed answer has been posted at math.stackexchange.com/a/4032652/96384)

This is a Q&A-style answer, not meant to be the final word on the question. It completes the original technique for future readers. Thanks to Dietrich Burde for the motivation to continue with it.

As above, suppose $A$ is an isomorphism $\mathfrak{su}(2) \to \mathfrak{sl}(2, \mathbb{R})$. Then $$[AX, AY] = A[X, Y]$$

Write the images of the basis vectors in coordinates: $$Ae_i = \sum_j A_{ij} f_j$$ We use $[Ae_1, Ae_2] = A[e_1, e_2]$ to obtain $$2\begin{vmatrix} A_{11} & A_{21} \\ A_{12} & A_{22} \end{vmatrix}f_2 + 2\begin{vmatrix} A_{11} & A_{21} \\ A_{13} & A_{23} \end{vmatrix}(-f_3) + \begin{vmatrix} A_{12} & A_{22} \\ A_{13} & A_{23} \end{vmatrix}f_1 = 2 (A_{31}f_1 + A_{32}f_2 + A_{33}f_3)$$ Comparing coefficients of $f_1, f_2, f_3$ yields three equations. Combining them with the cofactor expansion of the determinant $$\begin{vmatrix} A_{11} & A_{21} & A_{31} \\ A_{12} & A_{22} & A_{32} \\ A_{13} & A_{23} & A_{33} \end{vmatrix} = A_{31}\begin{vmatrix} A_{12} & A_{22} \\ A_{13} & A_{23} \end{vmatrix} - A_{32}\begin{vmatrix} A_{11} & A_{21} \\ A_{13} & A_{23} \end{vmatrix} + A_{33}\begin{vmatrix} A_{11} & A_{21} \\ A_{12} & A_{22} \end{vmatrix} $$ we obtain $$\det(A) = 2 A_{31}^2 + 2 A_{32}A_{33}$$ Using the other two commutation relations we get $$\det(A) = 2 A_{11}^2 + 2 A_{12} A_{13}$$ $$\det(A) = 2 A_{21}^2 + 2 A_{22} A_{23}$$
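(The first identity can be verified mechanically: solve the three coefficient equations from $[Ae_1, Ae_2] = 2Ae_3$ for the coordinates of $Ae_3$ and substitute into the determinant. A sympy sketch, with names mine:)

```python
import sympy as sp

A11, A12, A13, A21, A22, A23 = sp.symbols('A11 A12 A13 A21 A22 A23')

# 2x2 minors built from the coordinates of A e1 and A e2
M1 = A12*A23 - A13*A22  # f1 coefficient of [Ae1, Ae2]
M2 = A11*A23 - A13*A21  # f3 coefficient is -2*M2
M3 = A11*A22 - A12*A21  # f2 coefficient is  2*M3

# [Ae1, Ae2] = 2 A e3 forces the coordinates of A e3:
A31, A32, A33 = M1/2, M3, -M2

A = sp.Matrix([[A11, A21, A31],
               [A12, A22, A32],
               [A13, A23, A33]])

assert sp.simplify(A.det() - (2*A31**2 + 2*A32*A33)) == 0
```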

ANSWER

Just to add another method to this nice collection of answers:

One sees easily that $\mathfrak{sl}(2, \mathbb R)$ has many ad-nilpotent elements besides $0$ (e.g. in your presentation $f_2$ and $f_3$), whereas one can show that $\mathfrak{su}_2$ has no ad-nilpotent element $\neq 0$.

To see this latter fact, one could compute the eigenvalues of $ad$ of a general element, or use the following shortcut: We know that $\mathfrak{su}_2$ has a standard representation on $\mathbb C^2$ which identifies it with the matrices

$$\pmatrix{ai&b+ci\\-b+ci&-ai}$$

($a,b,c \in \mathbb R$); now if some element were $ad$-nilpotent, then it would have to act nilpotently in any representation, i.e. the above matrix would also need to be nilpotent, and in particular have vanishing determinant. But its determinant is $a^2+b^2+c^2$, which is $\neq 0$ unless $a=b=c=0$.
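(The determinant claim is immediate to confirm symbolically; a minimal sketch:)

```python
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)
X = sp.Matrix([[sp.I*a, b + sp.I*c],
               [-b + sp.I*c, -sp.I*a]])

# traceless and anti-Hermitian, i.e. an element of su(2)
assert X.trace() == 0 and sp.simplify(X + X.H) == sp.zeros(2, 2)

# det(X) = a^2 + b^2 + c^2, which vanishes only when a = b = c = 0
assert sp.expand(X.det() - (a**2 + b**2 + c**2)) == 0
```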


The nice thing about this method is that it is a special case of a general fact I have found useful:

Among the semisimple real Lie algebras, the ones of the compact forms (like $\mathfrak{su}_n$) are precisely the ones which have no non-trivial (ad-)nilpotent elements.

ANSWER

My proposed proof uses the fact that a Hermitian matrix can be diagonalized by a unitary matrix to show that there is no isomorphism preserving the Lie bracket. (Note: here $\mathfrak{su}(2)$ is taken in the physics convention, as traceless Hermitian matrices; with the anti-Hermitian convention the same argument goes through with $\Lambda$ purely imaginary.)

Let $X, Y, Z$ be a basis of $\mathfrak{sl}(2,\Bbb R)$

with the usual brackets:

$[X,Y] = Z\,,\,[X,Z] = -2X\,,\,[Y,Z] = 2Y$

The isomorphism $\phi:\mathfrak{sl}(2,\Bbb R) \to \mathfrak{su}(2)$ must preserve the brackets; let's use the last one:

$\phi(2Y) = \phi([Y,Z]) = [\phi(Y),\phi(Z)]$

$A\equiv \phi(Y)\in \mathfrak{su}(2)\,,\,B\equiv\phi(Z)\in \mathfrak{su}(2)$

$[A,B] = 2A$

Since $A$ is Hermitian we have

$A = W^*\Lambda W\,,\,W^*W=I$, where $\Lambda$ is a real diagonal matrix

Substituting into $[A,B] = 2A$, an easy calculation shows that

$[\Lambda,H] = 2\Lambda$

where $H\equiv WBW^* \in \mathfrak{su}(2)$ and $\Lambda = WAW^* \in \mathfrak{su}(2)$

So we have that:

$\Lambda = \left[ \begin{array}{cc} s&0\\ 0&-s \end{array} \right]\,,\,H = \left[ \begin{array}{cc} r&\gamma^*\\ \gamma&-r \end{array} \right]\,,\,r,s\in\Bbb R$ and $\gamma \in \Bbb C $

and substituting into $[\Lambda,H] = 2\Lambda$ it is easy to see that the matrix equation is satisfied only if $s = 0$,

which would imply $\Lambda = 0$, hence $\phi(Y) = A = W^*\Lambda W = 0$ and then $Y=0$ (since $\phi$ is an isomorphism), which is absurd; so the brackets cannot be preserved.
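(A quick symbolic check of this last step, with names mine: the diagonal entries of $[\Lambda,H] - 2\Lambda$ are $\mp 2s$, so the equation forces $s = 0$.)

```python
import sympy as sp

s, r = sp.symbols('s r', real=True)
g = sp.symbols('gamma')  # complex off-diagonal parameter

Lam = sp.Matrix([[s, 0], [0, -s]])
H = sp.Matrix([[r, sp.conjugate(g)], [g, -r]])

residual = (Lam*H - H*Lam) - 2*Lam
print(sp.expand(residual))
# Matrix([[-2*s, 2*s*conjugate(gamma)], [-2*s*gamma, 2*s]])
# all entries vanish iff s = 0
```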