Good Basis for Lie Group Representations?

In $SU(2)$, all of the weight multiplicities of irreps are zero or one, so I can define a basis in which each vector (defined modulo rescaling) is labeled uniquely by its weight. In $SU(3)$, however, this already fails: the zero weight in the adjoint representation has multiplicity 2. How can one distinguish the two zero-weight vectors in such a case?
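Concretely, the weights of the adjoint representation of $\mathfrak{su}(3)$ are the six roots, each occurring once, together with the zero weight carried by the two-dimensional Cartan subalgebra $\mathfrak{h}$:

$$\underbrace{\pm\alpha_1,\ \pm\alpha_2,\ \pm(\alpha_1+\alpha_2)}_{\text{multiplicity } 1} \qquad \text{and} \qquad \underbrace{0}_{\text{multiplicity } \dim \mathfrak{h} \,=\, 2},$$

so any two linearly independent elements of $\mathfrak{h}$ serve equally well as the zero-weight basis vectors, and the weight alone cannot tell them apart.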

One thought I had that might work for finite-dimensional representations of Lie groups is to choose a highest weight vector and then apply the lowering operators to it to produce a basis for the whole representation. Judging from the Kostant formula for the weight multiplicities, however, this is likely to produce an overcomplete spanning set rather than a basis. Is there perhaps a way to order the lowering operators so that this doesn't happen?
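For reference, the formula in question is Kostant's multiplicity formula

$$\dim V_\lambda(\mu) \;=\; \sum_{w \in W} (-1)^{\ell(w)}\, \mathcal{P}\bigl(w(\lambda + \rho) - (\mu + \rho)\bigr),$$

where $W$ is the Weyl group, $\rho$ is half the sum of the positive roots, and $\mathcal{P}$ is the Kostant partition function counting the ways to write a weight as a nonnegative integer combination of positive roots. The number of monomials in the lowering operators taking $\lambda$ down to $\mu$ is $\mathcal{P}(\lambda - \mu)$, which generally exceeds the actual multiplicity, so the resulting vectors are linearly dependent.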

I would also like to ask this question for finite group representations, but maybe this is enough for now.

BEST ANSWER

The general context is this: you have a vector space $V$ and a commuting set of diagonalizable operators acting on $V$. If the common eigenspaces are one-dimensional, then any choice of eigenvectors gives you a basis, and different choices differ only by a diagonal change of basis matrix. But if some of the common eigenspaces have larger dimension, how should you find "natural" basis elements?
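Here is a toy numerical illustration of this refinement (a minimal sketch with made-up commuting symmetric matrices, using NumPy; nothing here is specific to Lie theory). The operator `A` has a repeated eigenvalue, and the commuting operator `B` resolves the ambiguity:

```python
import numpy as np

# Two commuting symmetric operators on R^3 (made-up for illustration).
# A alone has the repeated eigenvalue 2, so eigenvectors in that
# eigenspace are only determined up to a 2-dim rotation; B breaks the tie.
A = np.diag([2.0, 2.0, 5.0])
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])
assert np.allclose(A @ B, B @ A)

# Because A and B commute and their joint eigenspaces are one-dimensional,
# the eigenvectors of a generic linear combination are exactly the
# simultaneous eigenvectors of the pair.
_, vecs = np.linalg.eigh(A + np.pi * B)  # pi as a "generic" coefficient

for v in vecs.T:  # columns of vecs are orthonormal eigenvectors
    a, b = v @ A @ v, v @ B @ v
    assert np.allclose(A @ v, a * v) and np.allclose(B @ v, b * v)
    print(f"joint eigenvalue label: ({a:.3g}, {b:.3g})")
```

The printed pairs $(2, 1)$, $(2, -1)$, $(5, 3)$ are all distinct, so each basis vector is labeled uniquely, exactly as in the weight-plus-extra-operators picture.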

The general philosophy is that one should enlarge the set of commuting operators, in order to find a set of commuting operators large enough so that the eigenspaces are all one dimensional. In Lie theory, the two most well-known situations in which this occurs are Jucys-Murphy-Young bases of representations of the symmetric group, and Gelfand-Tsetlin bases of representations of general linear groups (or other type A groups). All of this is closely related to the philosophy in integrable systems; in fact, many very interesting integrable systems (e.g. those of Calogero-Moser type) can be studied via representation-theoretic tools. Jack and Macdonald polynomials arise this way.
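To make the Gelfand-Tsetlin case concrete (a standard fact, not specific to this answer): along the chain $\mathfrak{gl}_1 \subseteq \mathfrak{gl}_2 \subseteq \mathfrak{gl}_3$ every branching is multiplicity-free, so a basis vector of an irreducible $\mathfrak{gl}_3$-representation with highest weight $(\lambda_1, \lambda_2, \lambda_3)$ is pinned down by recording the highest weight of the irreducible summand containing it at each step of the chain. These data form a triangular Gelfand-Tsetlin pattern

$$\begin{array}{ccccc} \lambda_1 & & \lambda_2 & & \lambda_3 \\ & \mu_1 & & \mu_2 & \\ & & \nu_1 & & \end{array} \qquad \lambda_1 \geq \mu_1 \geq \lambda_2 \geq \mu_2 \geq \lambda_3, \quad \mu_1 \geq \nu_1 \geq \mu_2,$$

and distinct patterns give distinct joint eigenvalues of the Gelfand-Tsetlin subalgebra (generated by the centers of the $U(\mathfrak{gl}_k)$), so each common eigenspace is one-dimensional.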

In the most well-studied situations, the pattern for producing large commutative families of operators goes as follows: you have a tower of (non-commutative) algebras $$A_1 \subseteq A_2 \subseteq A_3 \subseteq \cdots$$ and you attempt to study the restriction/induction rules for this tower. You realize that there are some obvious elements in the centralizer of $A_{m-1}$ inside $A_m$, which therefore produce endomorphisms of the restriction/induction functors. Collecting together all these "obvious" elements (for all $m \leq n$) gives you a commutative subalgebra of $A_n$, which is sometimes large enough to have one-dimensional eigenspaces. This happens in the Gelfand-Tsetlin and Jucys-Murphy-Young examples mentioned above.
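As a small sketch of this pattern for the tower $\mathbb{C}[S_1] \subseteq \mathbb{C}[S_2] \subseteq \mathbb{C}[S_3]$: the Jucys-Murphy element $X_k = \sum_{i<k}(i\,k)$ centralizes $\mathbb{C}[S_{k-1}]$ inside $\mathbb{C}[S_k]$, since conjugating by $S_{k-1}$ just permutes the transpositions in the sum. The NumPy check below uses the permutation representation of $S_3$ on $\mathbb{C}^3$ (my choice for illustration, not taken from the answer): $X_2$ alone has a repeated eigenvalue, but the pair $(X_2, X_3)$ separates the basis vectors.

```python
import numpy as np

def transposition(i, j, n=3):
    """Permutation matrix of the transposition (i j) acting on C^n
    (0-indexed), i.e. swapping basis vectors e_i and e_j."""
    P = np.eye(n)
    P[[i, j]] = P[[j, i]]
    return P

# Jucys-Murphy elements X_k = sum of transpositions (i k) with i < k.
# In 0-indexed terms: X2 = (0 1), X3 = (0 2) + (1 2).
X2 = transposition(0, 1)
X3 = transposition(0, 2) + transposition(1, 2)
assert np.allclose(X2 @ X3, X3 @ X2)  # the JM elements commute

# X2 alone is degenerate: the eigenvalue 1 occurs twice.
print(np.round(np.linalg.eigvalsh(X2), 6))   # [-1.  1.  1.]

# Diagonalize a generic combination; since the joint eigenspaces of
# (X2, X3) are one-dimensional here, its eigenvectors are joint
# eigenvectors, labeled by pairs of eigenvalues (the "contents"
# of boxes in standard Young tableaux).
_, vecs = np.linalg.eigh(X2 + np.pi * X3)
for v in vecs.T:
    print(f"(X2, X3) eigenvalues: ({v @ X2 @ v:+.0f}, {v @ X3 @ v:+.0f})")
# Expected pairs: (+1, +2) on the trivial summand and (+1, -1), (-1, +1)
# on the two-dimensional standard summand.
```

The three eigenvalue pairs are pairwise distinct even though each of $X_2$, $X_3$ is degenerate on its own, which is exactly the "enlarge the commuting family" philosophy in miniature.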