Why can we write the weights of a representation in terms of the simple roots?


I'm currently trying to get my head around the fact that we can write the weights of any representation in terms of the simple roots of the algebra.

Is there any, not too-technical, explanation?

I understand that roots are the weights of the adjoint representation, and I know the definition of simple roots.


I am not entirely sure what material you'd like to assume and what should be proved. So here I'll start with some provisional definitions (to avoid thinking too much, always working over an algebraically closed field of characteristic $0$):

The radical of a Lie algebra is its largest solvable ideal. A Lie algebra is semisimple if its radical is $0$. A Cartan subalgebra of a Lie algebra is a self-normalizing nilpotent subalgebra.

Now for the fact upon which the answer to your question ultimately depends: each Cartan subalgebra of a semisimple Lie algebra is abelian (for any Lie algebra, the Cartan subalgebras are conjugate by the group of inner automorphisms, so if one is abelian, they all are). Given this, a semisimple Lie algebra $L$ decomposes as

$$L=\mathfrak{h} \oplus \bigoplus_{\alpha \in R} L_\alpha,$$ where $R \subseteq \mathfrak{h}^*$ is the set of roots, and $$L_\alpha=\{x \in L \ | \ [h,x]=\alpha(h) x \ \text{for all} \ h \in \mathfrak{h}\}$$ defines the root space $L_\alpha$. The set of roots spans $\mathfrak{h}^*$, since otherwise the center of $L$ would be non-trivial, contradicting semisimplicity. It follows that any element of $\mathfrak{h}^*$, and in particular any weight of a representation, can be written as a linear combination of roots.
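For the smallest example (a standard illustration of my own, not part of the original answer), take $L = \mathfrak{sl}(2,\mathbb{C})$:

```latex
% sl(2,C) with the standard basis
%   h = diag(1,-1),  e = E_{12},  f = E_{21},
% satisfying [h,e] = 2e, [h,f] = -2f, [e,f] = h.
% Take \mathfrak{h} = \mathbb{C}h; the roots are \pm\alpha with
% \alpha(h) = 2, and the decomposition reads
\mathfrak{sl}(2,\mathbb{C})
   = \mathfrak{h} \,\oplus\, L_{\alpha} \,\oplus\, L_{-\alpha},
\qquad L_{\alpha} = \mathbb{C}e, \quad L_{-\alpha} = \mathbb{C}f.
```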

The Killing form of $L$ is defined by $$(x,y)=\mathrm{tr}(\mathrm{ad}(x) \mathrm{ad}(y)).$$ Now assume $L$ is semisimple. Then the Killing form remains non-degenerate upon restriction to $\mathfrak{h}$, and hence allows us to identify $\mathfrak{h} \cong \mathfrak{h}^*$. Moreover, with respect to this identification, the Killing form is positive definite on the real span of the roots.
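As a sanity check, the Killing form of $\mathfrak{sl}(2,\mathbb{C})$ can be computed numerically straight from the definition. This is a small illustrative script of my own (the basis $h, e, f$ and the helper functions are choices I am making, not part of the original answer):

```python
import numpy as np

# Basis of sl(2, C): h, e, f as 2x2 matrices.
h = np.array([[1, 0], [0, -1]], dtype=float)
e = np.array([[0, 1], [0, 0]], dtype=float)
f = np.array([[0, 0], [1, 0]], dtype=float)
basis = [h, e, f]

def bracket(x, y):
    return x @ y - y @ x

def ad_matrix(x, basis):
    """Matrix of ad(x) = [x, -] in the given basis, found by
    expanding each bracket [x, b_j] in the basis via least squares."""
    B = np.column_stack([b.flatten() for b in basis])
    cols = []
    for b in basis:
        c, *_ = np.linalg.lstsq(B, bracket(x, b).flatten(), rcond=None)
        cols.append(c)
    return np.column_stack(cols)

def killing(x, y, basis):
    """Killing form (x, y) = tr(ad(x) ad(y))."""
    return np.trace(ad_matrix(x, basis) @ ad_matrix(y, basis))

# Up to floating-point rounding these come out as 8, 4 and 0,
# matching the well-known values for sl(2, C).
print(killing(h, h, basis))
print(killing(e, f, basis))
print(killing(h, e, basis))
```

In particular $(h,h) \neq 0$, consistent with non-degeneracy of the Killing form on $\mathfrak{h}$.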

Now for simple roots: one quick way to define them is to choose a vector $v$ in the real span of the roots that is not orthogonal to any root, define the set of positive roots to be the set of $\alpha$ with $(\alpha,v) > 0$, and then take the set of simple roots to be the positive roots that cannot be written as a sum of two positive roots. One then shows (by induction on $(\alpha, v)$) that every positive root is a sum of simple roots, and hence the simple roots span $\mathfrak{h}^*$ as well.
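This recipe can be carried out directly for $\mathfrak{sl}(3,\mathbb{C})$. The following script is my own illustration (the coordinates chosen for the roots and the particular $v$ are assumptions, not from the answer):

```python
import itertools

# Roots of sl(3, C) in the basis of coordinate functionals e_1, e_2, e_3
# on diagonal matrices: the roots are e_i - e_j for i != j.
roots = [(1, -1, 0), (-1, 1, 0), (0, 1, -1),
         (0, -1, 1), (1, 0, -1), (-1, 0, 1)]

# A generic vector v, not orthogonal to any root.
v = (3, 2, 1)
dot = lambda a, b: sum(x * y for x, y in zip(a, b))

# Positive roots: those with (alpha, v) > 0.
positive = [r for r in roots if dot(r, v) > 0]

# Simple roots: positive roots that are not a sum of two positive roots.
sums = {tuple(a + b for a, b in zip(r, s))
        for r, s in itertools.combinations(positive, 2)}
simple = [r for r in positive if r not in sums]

print(positive)  # three positive roots
print(simple)    # two simple roots; the third positive root is their sum
```

The output exhibits the familiar picture for $\mathfrak{sl}(3)$: two simple roots $\alpha_1, \alpha_2$, with the remaining positive root equal to $\alpha_1 + \alpha_2$.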

It is a somewhat more subtle fact that the weights of any finite-dimensional representation are integer combinations of the fundamental weights, and hence rational (though not necessarily integer) combinations of the simple roots.
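To make this concrete in the smallest case (again my own illustration, not part of the original answer): for $\mathfrak{sl}(2,\mathbb{C})$ the fundamental weight is half the positive root, so weights are integer multiples of the fundamental weight but in general only half-integer multiples of the root:

```latex
% For sl(2,C): positive root \alpha, fundamental weight \omega = \alpha/2.
% The irreducible representation of dimension n+1 has weights
n\omega,\ (n-2)\omega,\ \dots,\ -n\omega,
% i.e. integer combinations of \omega, but in general only
% half-integer (rational) combinations of the simple root \alpha.
```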