How to derive explicit matrix representations for the non-diagonal generators of a simple Lie algebra?


Given a semisimple Lie algebra's Dynkin diagram/Cartan matrix, one can easily find the weights of any particular representation. The weights of the defining representation directly give the mutually commuting (i.e. Cartan) generators in the Cartan-Weyl basis. But how do we derive explicit matrix representations of the non-Cartan generators (the ladder operators), i.e. the ones which don't commute? Of course, if I knew the matrix representation from the beginning, I could just take linear combinations to move to the Cartan-Weyl basis, but that's not what I am doing here. Is it possible to derive, for example, the non-diagonal Pauli matrices for $\mathfrak{su}(2)$ from the root/weight data of $A_1$? (I think "yes", because the Cartan matrix encodes full information about the Lie algebra.) How do we do this? If only I could find the action of these generators $E_\alpha$ on the weight vectors, I could get the matrix elements easily. I tried using the commutation relations between the Cartan and non-Cartan generators but got nowhere. I would be grateful to anyone who works out even the trivial example of $A_1$, which I couldn't find in books; if not, please provide some detail on how I might do this for any simple Lie algebra.

All you are looking for, in effect, is a Cartan-Weyl basis for the Lie algebra, but this is quite easy to work out directly, at least in the defining representation (what physicists call the fundamental representation) of the classical Lie algebras, and from there you can often extend to other representations as well.

Note one important thing though: you are using "the" a lot when everything here is a choice. We choose a basis for the representation and then we choose a scale for each $E_\alpha$. Typically we ensure $[E_\alpha,E_{-\alpha}] = H_\alpha$, and perhaps also that $E_{\alpha+\beta} = [E_\alpha,E_{\beta}]$ where possible (say, for all simple roots).

Turning a weight system directly into a matrix representation is more of a pain, though. Using $E_\alpha (V_\lambda) \subset V_{\lambda + \alpha}$ we can determine which entries of the matrix must be $0$, and using $[E_\alpha,E_{-\alpha}] = H_\alpha$ we can work out what the remaining entries have to be (again, all of this depends on a choice of scaling somewhere).
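To make this concrete, here is a minimal NumPy sketch of that procedure for the $A_1$ example asked about in the question; the variable names and the choice $c = d = 1$ are mine:

```python
import numpy as np

# A_1, defining rep: the weights are +1 and -1.  Order the basis by
# descending weight, so the Cartan generator is already diagonal:
H = np.diag([1.0, -1.0])

# E_alpha shifts weight by the root (+2), so it can only send the weight -1
# vector to the weight +1 vector: a single unknown entry c in slot (0, 1).
# Likewise E_{-alpha} has a single unknown d in slot (1, 0).
# [E_alpha, E_{-alpha}] = H then forces c*d = 1; how to split that between
# c and d is exactly the scaling choice mentioned above.
c, d = 1.0, 1.0
E = np.array([[0.0, c],
              [0.0, 0.0]])
F = np.array([[0.0, 0.0],
              [d,   0.0]])

def comm(A, B):
    return A @ B - B @ A

# Check the Cartan-Weyl relations for A_1:
assert np.allclose(comm(H, E), 2.0 * E)   # [H, E] = alpha(H) E
assert np.allclose(comm(H, F), -2.0 * F)  # [H, F] = -alpha(H) F
assert np.allclose(comm(E, F), H)         # [E, F] = H
```

The same bookkeeping works in higher rank: the weight shifts pin down the zero pattern of each $E_\alpha$, and the relations $[E_\alpha, E_{-\alpha}] = H_\alpha$ pin down the remaining entries up to scale.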

But for the classical Lie algebras it is much simpler to just calculate directly. For example, for $\mathfrak{sl}_n$ in its usual representation on $\mathbb{C}^n$, the $E_\alpha$ can simply be chosen to be the matrices with a single $1$ in one off-diagonal slot and $0$'s everywhere else. You also get nice relations like $E_{-\alpha} = E_\alpha^T$. If you want to convert this to $\mathfrak{su}_n$, you simply take your basis to comprise all the $E_\alpha - E_{-\alpha}$ and $i(E_\alpha + E_{-\alpha})$. This gives you the Pauli matrices, for example, although you have to multiply by $i$ because physicists like $\mathfrak{su}_n$ to consist of Hermitian matrices rather than the more natural anti-Hermitian ones.
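For $n = 2$ this recipe produces the Pauli matrices explicitly; a short check (sign conventions here are one possible choice):

```python
import numpy as np

# sl_2 in the defining rep: E_alpha = e_12, and E_{-alpha} = E_alpha^T.
E = np.array([[0, 1], [0, 0]], dtype=complex)
F = E.T
H = E @ F - F @ E                 # diag(1, -1)

# Anti-Hermitian su(2) basis elements are E - F and i(E + F); multiplying
# by i (up to sign) gives the physicists' Hermitian Pauli matrices:
sigma_x = E + F                   # [[0, 1], [1, 0]]
sigma_y = 1j * (F - E)            # [[0, -i], [i, 0]]
sigma_z = H                       # [[1, 0], [0, -1]]

for s in (sigma_x, sigma_y, sigma_z):
    assert np.allclose(s, s.conj().T)          # Hermitian, as desired

# The standard su(2) relation [sigma_x, sigma_y] = 2i sigma_z:
assert np.allclose(sigma_x @ sigma_y - sigma_y @ sigma_x, 2j * sigma_z)
```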

Note that it is also common in physics to require the matrices to form an orthogonal basis, as with the Gell-Mann matrices, but this construction works for that too: the elements given above are already orthogonal under the inner product induced by the Killing form (you just have to make sure the diagonal elements are chosen to be orthogonal to each other as well).
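For instance, the diagonal Gell-Mann matrices $\lambda_3, \lambda_8$ of $\mathfrak{su}(3)$ are exactly such an orthogonal choice of diagonal elements; a quick numerical check using the trace form $\langle X, Y\rangle = \operatorname{tr}(XY)$ (proportional to the Killing form on a simple algebra):

```python
import numpy as np

# The two diagonal Gell-Mann matrices for su(3):
lam3 = np.diag([1.0, -1.0, 0.0])
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)

# Orthogonal under the trace form, and normalized alike: tr(lam_a lam_b) = 2 delta_ab.
assert abs(np.trace(lam3 @ lam8)) < 1e-12
assert abs(np.trace(lam3 @ lam3) - 2.0) < 1e-12
assert abs(np.trace(lam8 @ lam8) - 2.0) < 1e-12
```

The off-diagonal elements $E_\alpha - E_{-\alpha}$ and $i(E_\alpha + E_{-\alpha})$ are automatically orthogonal to these and to each other under the same form.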