Product of spherical tensors


There are many interesting ways of choosing a basis for a vector space of square matrices. This question is about one such way.

For every dimension $ 1,2,3,\dots $ there is, up to isomorphism, a unique irreducible representation of $ SU(2) $.

Let $ j=0,1/2,1,3/2 \dots $ be a nonnegative half integer. In physics, a $ 2j+1 $ dimensional irrep of $ SU(2) $ is called a spin $ j $ system.

The action of $ g \in SU(2) $ on a spin $ j $ system is given by a $ (2j+1) \times (2j+1) $ unitary matrix called $ D^j(g) $.

Consider the vector space of $ (2j+1) \times (2j+1) $ square matrices. There is an action of $ SU(2) $ on this space by conjugation: $ X \mapsto D^j(g)\, X\, D^j(g)^{-1} $. For those who know the representation theory of $ SU(2) $ it will come as no surprise that this particular $ (2j+1)^2 $ dimensional representation of $ SU(2) $ is reducible and indeed decomposes into irreps as $$ (2j+1) \otimes (2j+1) = \bigoplus_{k=0}^{2j} (2k+1). $$ For example, $ 4 \otimes 4= 7 \oplus 5 \oplus 3 \oplus 1 $.
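
As a quick dimension check, $ \sum_{k=0}^{2j} (2k+1) = (2j+1)^2 $, so the irreducible pieces account for the whole space of matrices; in the example above, $ 1+3+5+7 = 16 = 4^2 $.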

In physics it is common to refer to the $ 2k+1 $ dimensional irreducible subrep of this representation as "the spherical tensors of rank $ k $" (the spin $ j $ is considered obvious from context). A basis of the space of rank $ k $, spin $ j $, spherical tensors is given by the set of matrices $ \{ T^k_q(j): -k\leq q \leq k \} $.

For those familiar with representation theory it will come as no surprise that spherical tensors of different rank $ k $ are orthogonal with respect to the trace (Hilbert–Schmidt) inner product $ \langle A,B \rangle = \operatorname{Tr}(A^\dagger B) $. Furthermore, the $ T^k_q(j) $ are constructed so that, within each rank, the different components $ q $ are orthonormal as well. So the set $ \{T_q^k(j): 0 \leq k \leq 2j, -k \leq q \leq k \} $ is an orthonormal basis for the space of $ (2j+1) \times (2j+1) $ matrices.
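
For readers who want to experiment, here is a minimal Python/SymPy sketch of one common construction of the $ T^k_q(j) $ (matrix elements proportional to Clebsch-Gordan coefficients, normalized so the trace inner product gives 1; Varshalovich's convention may differ from this by phases), together with a check of the orthonormality claim:

```python
# A minimal sketch (one common convention, not necessarily Varshalovich's):
# build T^k_q(j) with matrix elements
#     <j m' | T^k_q | j m> = sqrt((2k+1)/(2j+1)) * <j m; k q | j m'>,
# then verify orthonormality in the trace inner product Tr(A^dagger B).
from sympy import Rational, S, simplify, sqrt, zeros
from sympy.physics.quantum.cg import CG

def T(j, k, q):
    """Rank-k, component-q spherical tensor as a (2j+1) x (2j+1) matrix."""
    dim = int(2 * j + 1)
    ms = [j - i for i in range(dim)]          # m = j, j-1, ..., -j
    M = zeros(dim, dim)
    for a, mp in enumerate(ms):               # row    <-> m'
        for b, m in enumerate(ms):            # column <-> m
            M[a, b] = sqrt(S(2 * k + 1) / (2 * j + 1)) * CG(j, m, k, q, j, mp).doit()
    return M

j = Rational(3, 2)                            # spin 3/2, i.e. 4 x 4 matrices
basis = {(k, q): T(j, k, q)
         for k in range(int(2 * j) + 1) for q in range(-k, k + 1)}

# Tr(T^{k1 dagger}_{q1} T^{k2}_{q2}) should be 1 when (k1,q1)=(k2,q2), else 0.
for (k1, q1), A in basis.items():
    for (k2, q2), B in basis.items():
        inner = simplify((A.H * B).trace())
        assert inner == (1 if (k1, q1) == (k2, q2) else 0)

print(len(basis), "orthonormal matrices span the", int(2 * j + 1) ** 2, "dimensional space")
```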

Since the $ T^k_q(j) $ are a spanning set for the space of square matrices, we can always multiply two of them as matrices and express the result as a linear combination of the $ T^k_q(j) $.

So far nothing is too surprising.

What I find quite remarkable is that the product $ T^{k_1}_{q_1}(j) T^{k_2}_{q_2}(j) $ is always a linear combination of $ T^k_q(j) $ for $ k $ in the range $ |k_1-k_2| \leq k \leq \min \{ 2j,k_1+k_2 \} $. In this way the multiplication of the $ T^k_q(j) $ looks almost like multiplication in a graded algebra.

A reference for this fact is equation (16) of Section 2.4.4, page 45, in Quantum Theory of Angular Momentum by D. A. Varshalovich, A. N. Moskalev and V. K. Khersonskii. However, no derivation, explanation or proof is provided in this reference.

Does this look familiar to anyone? Are there other instances, say for other Lie groups, where we can break the space of square matrices into irreps and then find a nice orthonormal basis which seems to be "graded" in this way? In other words, where the product of two basis matrices can be expressed as a sum of other basis matrices with degree/rank bounded by the degree/rank of the two original matrices?

If that question seems too vague, then does anyone have a derivation, explanation or proof for the fact $$ T^{k_1}_{q_1}(j) T^{k_2}_{q_2}(j)=\sum_{k=|k_1-k_2|}^{\min \{ 2j,\,k_1+k_2 \}} C(k_1,q_1,k_2,q_2,k,q,j)\, T^k_q(j), $$ where $ q = q_1+q_2 $ and $ C(k_1,q_1,k_2,q_2,k,q,j) $ is a coefficient with an explicit form in terms of the Clebsch-Gordan coefficients and 6j symbols? For details see the reference.
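
Here is a hedged numerical check of that expansion using the same Clebsch-Gordan-based construction of $ T^k_q(j) $ as in the sketch above (again my own normalization, which need not match the book's): expand a product in the orthonormal basis and confirm that nonzero coefficients only occur for $ |k_1-k_2| \leq k \leq \min\{2j, k_1+k_2\} $ and $ q = q_1+q_2 $.

```python
# Expand T^{k1}_{q1} T^{k2}_{q2} in the orthonormal basis (same convention as
# the sketch above, possibly differing from the reference by phases) and check
# that only |k1-k2| <= k <= min(2j, k1+k2) and q = q1+q2 contribute.
from sympy import Rational, S, simplify, sqrt, zeros
from sympy.physics.quantum.cg import CG

def T(j, k, q):
    dim = int(2 * j + 1)
    ms = [j - i for i in range(dim)]
    M = zeros(dim, dim)
    for a, mp in enumerate(ms):
        for b, m in enumerate(ms):
            M[a, b] = sqrt(S(2 * k + 1) / (2 * j + 1)) * CG(j, m, k, q, j, mp).doit()
    return M

j = Rational(3, 2)
k1, q1, k2, q2 = 2, 1, 2, -1
product = T(j, k1, q1) * T(j, k2, q2)

for k in range(int(2 * j) + 1):
    for q in range(-k, k + 1):
        # coefficient of T^k_q in the expansion, via the trace inner product
        c = simplify((T(j, k, q).H * product).trace())
        if c != 0:
            assert abs(k1 - k2) <= k <= min(int(2 * j), k1 + k2)
            assert q == q1 + q2
            print(f"k = {k}, q = {q}: coefficient {c}")
```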

I'm just trying to understand this beautiful equation a little bit better!

BEST ANSWER

I denote the $(2j+1)$-dimensional representation by $S^{2j}\mathbb{C}^2$ (physicists, I believe, just write the dimension $\mathbf{2j+1}$ in bold).

Let $M_j=S^{2j}\mathbb{C}^2 \otimes S^{2j}\mathbb{C}^2$ be the representation on the space of $(2j+1)\times (2j+1)$ matrices in question. Matrix multiplication gives rise to an $\mathrm{SU}(2)$-equivariant map $M_j \otimes M_j \to M_j$ sending $x \otimes y$ to $xy$. If $x, y$ lie in the subrepresentations $S^{2k_1}\mathbb{C}^2$ and $S^{2k_2}\mathbb{C}^2$ respectively, then $x \otimes y$ lies in the tensor product $$S^{2k_1} \mathbb{C}^2 \otimes S^{2k_2} \mathbb{C}^2 \cong \bigoplus_{k = |k_1 - k_2|}^{k_1 + k_2} S^{2k} \mathbb{C}^2,$$ so nonzero coefficients can only appear for $|k_1 - k_2| \leq k \leq k_1 + k_2$, exactly as you noticed.
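
For instance, take $j = 3/2$ and $k_1 = k_2 = 2$: the source decomposes as $$S^{4}\mathbb{C}^2 \otimes S^{4}\mathbb{C}^2 \cong S^{0}\mathbb{C}^2 \oplus S^{2}\mathbb{C}^2 \oplus S^{4}\mathbb{C}^2 \oplus S^{6}\mathbb{C}^2 \oplus S^{8}\mathbb{C}^2,$$ while the target $M_{3/2}$ contains no copy of $S^{8}\mathbb{C}^2$ (its ranks stop at $k = 2j = 3$), so the component of $x \otimes y$ in $S^{8}\mathbb{C}^2$ must be killed by the multiplication map. That is where the extra $\min\{2j, k_1+k_2\}$ cutoff comes from, as discussed below.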

Update: Here is a little more detail. I will use physicists' notation now because it seems you are more comfortable with it.

  1. We have $M_j = (\mathbf{2j+1})\otimes (\mathbf{2j+1})$ and the matrix multiplication, which is a map $M_j \times M_j \to M_j$. It is $\mathrm{SU}(2)$-equivariant in the sense $(g.x)(g.y) = g.(xy)$.

  2. $M_j$ decomposes as $M_j = \bigoplus_{k = 0}^{2j} (\mathbf{2k+1})$. We can restrict our multiplication map to $x \in (\mathbf{2k_1+1})$, $y \in (\mathbf{2k_2+1})$ and obtain a bilinear map from $(\mathbf{2k_1+1}) \times (\mathbf{2k_2+1})$ to $M_j = (\mathbf{2j+1})^{\otimes 2}$.

  3. This bilinear map defines a linear map which maps tensors $x \otimes y \in (\mathbf{2k_1+1})\otimes (\mathbf{2k_2+1}) \subset M_j \otimes M_j$ to $xy \in M_j = (\mathbf{2j+1})^{\otimes 2}$.

  4. We have an equivariant map $f_{k_1, k_2}\colon (\mathbf{2k_1+1})\otimes (\mathbf{2k_2+1}) \to (\mathbf{2j+1})^{\otimes 2}$. Decomposing the source and the target into irreducibles, we obtain $$\bigoplus_{k=|k_1 - k_2|}^{k_1+k_2} (\mathbf{2k+1}) \to \bigoplus_{l = 0}^{2j} (\mathbf{2l+1}).$$

  5. Schur's lemma tells us that this map is nonzero only on common irreducible summands (so $\mathbf{2k + 1}$ with $|k_1 -k_2| \leq k \leq \min\{k_1 + k_2, 2j\}$), and on each irreducible summand it must be a multiple of the identity: $f_{k_1, k_2}(z_k) = \lambda_{k_1, k_2, k} z_k$ for $z_k \in (\mathbf{2k + 1})$.

  6. So to multiply two matrices $x \in (\mathbf{2k_1+1}) \subset M_j$ and $y \in (\mathbf{2k_2+1}) \subset M_j$ we do the following: form the tensor product $x \otimes y$, decompose it into irreducibles $$x \otimes y = \sum_{k = |k_1-k_2|}^{\min\{k_1+k_2,\, 2j\}} z_k$$ with $z_k \in (\mathbf{2k+1})$, multiply each $z_k$ by the coefficient $\lambda_{k_1, k_2, k}$, and then form the sum $$\sum_k \lambda_{k_1, k_2, k} z_k \in \bigoplus_{l = 0}^{2j} (\mathbf{2l+1}) = M_j.$$

  7. The only thing we didn't determine is the coefficients $\lambda_{k_1, k_2, k}$. We can probably find them by plugging appropriately chosen matrices in as $x$ and $y$ (maybe highest weight vectors).
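
As a sanity check on steps 5 and 7, here is a small SymPy sketch using the same Clebsch-Gordan-based construction of $T^k_q(j)$ as in the sketches above (which may differ from the book's convention by phases): the ratio of the coefficient of $T^k_{q_1+q_2}$ in the product to the Clebsch-Gordan coefficient $\langle k_1 q_1; k_2 q_2 | k, q_1+q_2\rangle$ should come out as a single number $\lambda_{k_1,k_2,k}$, independent of $q_1$ and $q_2$, exactly as Schur's lemma predicts.

```python
# Schur's lemma check: for fixed k1, k2, k the ratio
#   Tr( T^{k dagger}_{q1+q2} * T^{k1}_{q1} * T^{k2}_{q2} ) / <k1 q1; k2 q2 | k, q1+q2>
# should be a single number lambda_{k1,k2,k}, independent of q1 and q2
# (in the CG-based convention used in the sketches above).
from sympy import Rational, S, simplify, sqrt, zeros
from sympy.physics.quantum.cg import CG

def T(j, k, q):
    dim = int(2 * j + 1)
    ms = [j - i for i in range(dim)]
    M = zeros(dim, dim)
    for a, mp in enumerate(ms):
        for b, m in enumerate(ms):
            M[a, b] = sqrt(S(2 * k + 1) / (2 * j + 1)) * CG(j, m, k, q, j, mp).doit()
    return M

j, k1, k2, k = Rational(3, 2), 2, 1, 2
values = []
for q1 in range(-k1, k1 + 1):
    for q2 in range(-k2, k2 + 1):
        q = q1 + q2
        if abs(q) > k:
            continue
        cg = CG(k1, q1, k2, q2, k, q).doit()
        if cg == 0:
            continue
        coeff = simplify((T(j, k, q).H * T(j, k1, q1) * T(j, k2, q2)).trace())
        values.append(simplify(coeff / cg))

# all ratios agree: this common value is lambda_{k1,k2,k}
assert all(simplify(v - values[0]) == 0 for v in values)
print("lambda =", values[0])
```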