Unitarily equivariant, linear, Hermitian maps on matrix algebras


I realized my question was not well-posed, hence I proceeded to rewrite it from scratch.

Denote by $\mathcal{M}_{n}(\mathbb{C})$ the $C^{*}$-algebra of complex matrices. Let $L\colon\mathcal{M}_{n}(\mathbb{C})\longrightarrow\mathcal{M}_{n}(\mathbb{C})$ be a linear map such that $L(\mathbf{A}^{\dagger})=(L(\mathbf{A}))^{\dagger}$ for every $\mathbf{A}\in\mathcal{M}_{n}(\mathbb{C})$, where $\dagger$ is the standard involution on $\mathcal{M}_{n}(\mathbb{C})$ given by taking the conjugate transpose.

Now, assume that $L$ is unitarily equivariant, that is, assume that $$ L(\mathbf{U\,A\,U^{\dagger}})\,=\,\mathbf{U}\,L(\mathbf{A})\,\mathbf{U}^{\dagger} $$ for all $\mathbf{A}\in\mathcal{M}_{n}(\mathbb{C})$ and for every unitary matrix $\mathbf{U}\in\mathcal{M}_{n}(\mathbb{C})$ (i.e., $\mathbf{U}\mathbf{\,U^{\dagger}}=\mathbb{I}$ where $\mathbb{I}$ is the identity matrix).

Clearly, $L(\mathbf{A})=\alpha\mathbf{A}$ with $\alpha\in\mathbb{R}$ is a linear map satisfying all these assumptions, but I would like to know whether there are other, non-trivial maps that do.
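As a quick numerical sanity check of the two assumptions (using NumPy; the choice $L(\mathbf{A})=\alpha\mathbf{A}$ with $\alpha=2.5$ is just a sample map, not the general answer), one can verify Hermiticity and unitary equivariance on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A sample map of the stated form L(A) = alpha * A with real alpha.
alpha = 2.5
L = lambda A: alpha * A

# Random complex test matrix and a random unitary (Q factor of a QR decomposition).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

# Hermiticity: L(A^dagger) = L(A)^dagger.
assert np.allclose(L(A.conj().T), L(A).conj().T)

# Unitary equivariance: L(U A U^dagger) = U L(A) U^dagger.
assert np.allclose(L(U @ A @ U.conj().T), U @ L(A) @ U.conj().T)
```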

Any advice/suggestion/comment/solution is highly appreciated.


Accepted answer (by Martin):

Let $\{E_{kj}\}$ be the canonical matrix units.

Let $V$ be a unitary of the form $1\oplus U$ (i.e., $V=\begin{bmatrix} 1&0\\0& U\end{bmatrix}$ with $U$ an $(n-1)\times(n-1)$ unitary). Then $VE_{11}V^*=E_{11}$, so $$ L(E_{11})=L(VE_{11}V^*)=V\,L(E_{11})\,V^*. $$ So $L(E_{11})$ commutes with all such $V$. As we are free to choose $U$ to be any $(n-1)\times(n-1)$ unitary, we get that $$ L(E_{11})=\gamma_1 E_{11} +\delta_1 (I-E_{11}). $$

Now repeat this for each of $E_{22},\ldots,E_{nn}$. It follows that, for any $\alpha=(\alpha_1,\ldots,\alpha_n)$, there exists $\beta$ such that $$ L\Big(\sum_{j=1}^n \alpha_j E_{jj}\Big)=\sum_{j=1}^n \beta_jE_{jj}. $$ As $L$ is linear, it follows that the map $\beta=T\alpha$ is linear.

Now let $S$ be a permutation matrix (which is a unitary!). We have $$ L\Big(\sum_j (S\alpha)_jE_{jj}\Big)=L\left(S\Big(\sum_j\alpha_jE_{jj}\Big)S^*\right) =S\,L\left(\sum_j\alpha_jE_{jj}\right)\,S^*=\sum_j (S\beta)_jE_{jj}. $$ Looking at the coefficients, we have that $TS\alpha=S\beta=ST\alpha$. We can do this for any $\alpha\in\mathbb C^n$, so we have that $TS=ST$. As this occurs for every permutation matrix $S$, the matrix $T$ lies in the commutant of the permutation matrices acting on $\mathbb C^n$. That commutant is two-dimensional (the permutation representation splits as trivial $\oplus$ standard), spanned by the identity $I$ and the all-ones matrix $J$; hence $T=a\,I+b\,J$ for some $a,b\in\mathbb C$. Thus $$ L\Big(\sum_{j=1}^n \alpha_j E_{jj}\Big)=a\,\sum_{j=1}^n \alpha_jE_{jj}+b\,\Big(\sum_{j=1}^n\alpha_j\Big)\,I. $$

Now let $A$ be selfadjoint. Then there exists a unitary $U$ such that $A=U\left(\sum_j \alpha_j E_{jj}\right)U^*$, and $\sum_j\alpha_j=\text{tr}(A)$. Then $$ L(A)=L\left(U\Big(\sum_j \alpha_j E_{jj}\Big)U^*\right)=U\,L\left(\sum_j \alpha_j E_{jj}\right)U^*=a\,A+b\,\text{tr}(A)\,I. $$ As $L$ is linear and the selfadjoint matrices span all of $M_n(\mathbb C)$, we get that $L(A)=a\,A+b\,\text{tr}(A)\,I$ for all $A$.
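Two of the steps above can be spot-checked numerically (a sketch, with arbitrary sample values for $\gamma_1,\delta_1,a,b$): a matrix of the form $\gamma_1 E_{11}+\delta_1(I-E_{11})$ commutes with every unitary $V=1\oplus U$, and $T=aI+bJ$ commutes with every permutation matrix:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
n = 4

# Step 1: X = gamma*E11 + delta*(I - E11) commutes with every V = 1 (+) U.
gamma, delta = 1.3, -0.7
E11 = np.zeros((n, n)); E11[0, 0] = 1.0
X = gamma * E11 + delta * (np.eye(n) - E11)
U, _ = np.linalg.qr(rng.standard_normal((n-1, n-1)) + 1j * rng.standard_normal((n-1, n-1)))
V = np.block([[np.ones((1, 1)), np.zeros((1, n-1))],
              [np.zeros((n-1, 1)), U]])
assert np.allclose(V @ X, X @ V)

# Step 2: T = a*I + b*J (J = all-ones matrix) commutes with every permutation matrix S.
a, b = 0.4, 2.0
T = a * np.eye(n) + b * np.ones((n, n))
for p in permutations(range(n)):
    S = np.eye(n)[list(p)]
    assert np.allclose(S @ T, T @ S)
```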

If you require that $L(A^*)=L(A)^*$, then taking $A$ selfadjoint and traceless gives that $a\,A$ is selfadjoint, so $a=\bar a$; and taking $A=I$ gives $(a+bn)\,I=(\bar a+\bar b n)\,I$. That is, $a,b\in\mathbb R$.

Second answer:

With respect, Martin's answer works too hard. The second condition says that $L$ is an endomorphism of $M_n(\mathbb{C})$ as a complex representation of the unitary group $U(n)$. So of course we should try to understand the decomposition of this representation into irreducibles. First, the $n = 1$ case is somewhat degenerate and should be done separately. Now we assume that $n \ge 2$. $M_n(\mathbb{C})$ has a ($GL_n(\mathbb{C})$-equivariant) trace map $\text{tr} : M_n(\mathbb{C}) \to \mathbb{C}$ which splits it as a $U(n)$-representation into a direct sum

$$M_n(\mathbb{C}) \cong \mathfrak{sl}_n(\mathbb{C}) \oplus 1$$

where $1$ denotes the trivial representation, here concretely the subrepresentation spanned by the identity, and $\mathfrak{sl}_n(\mathbb{C})$ is the kernel of the trace map, so the space of traceless matrices.

Claim: $\mathfrak{sl}_n(\mathbb{C})$ is irreducible as a complex representation of $U(n)$.

Proof. Irreducibility as a complex $U(n)$-representation is equivalent to irreducibility as a representation of the (real) Lie algebra $\mathfrak{u}(n)$, which is equivalent to irreducibility as a representation of the complexification $\mathfrak{u}(n) \otimes \mathbb{C} \cong \mathfrak{gl}_n(\mathbb{C})$. Now irreducibility follows from the observation that $\mathfrak{sl}_n(\mathbb{C})$ is simple. $\Box$
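A numerical illustration of the claim (a sketch, not a proof): since $\mathfrak{sl}_n(\mathbb{C})$ is irreducible, the span of the unitary orbit of any single nonzero traceless matrix, here $E_{12}$, must be all of $\mathfrak{sl}_n(\mathbb{C})$, i.e. have complex dimension $n^2-1$. Sampling random unitaries and computing a rank checks this (with probability one over the random draws):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# The span of {U E12 U^dagger : U unitary} should be all of sl_n,
# i.e. have dimension n^2 - 1.
E12 = np.zeros((n, n), dtype=complex); E12[0, 1] = 1.0

vecs = []
for _ in range(2 * n * n):
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    U, _ = np.linalg.qr(Z)  # Q factor of a random complex matrix is unitary
    vecs.append((U @ E12 @ U.conj().T).ravel())

rank = np.linalg.matrix_rank(np.array(vecs))
assert rank == n * n - 1
```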

It follows by Schur's lemma that $L$ consists of multiplication by two scalars $\alpha, \beta$, one on $\mathfrak{sl}_n(\mathbb{C})$ and one on the trivial representation spanned by the identity matrix $1$. More explicitly,

$$L(A) = L \left( \left( A - \frac{\text{tr}(A)}{n} I \right) + \frac{\text{tr}(A)}{n} I \right) = \alpha \left( A - \frac{\text{tr}(A)}{n} I \right) + \beta \frac{\text{tr}(A)}{n} I.$$

This is a little cleaner if we write $\gamma = \frac{\beta - \alpha}{n}$ which lets us write

$$L(A) = \alpha A + \gamma \, \text{tr}(A) I.$$
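The algebraic equivalence of the two displayed forms is easy to confirm numerically (sample values of $\alpha,\beta$ chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
alpha, beta = 1.5, -2.0
gamma = (beta - alpha) / n  # the substitution from the text

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
I = np.eye(n)
t = np.trace(A)

# Schur-lemma form: alpha on the traceless part, beta on the identity line.
lhs = alpha * (A - (t / n) * I) + beta * (t / n) * I
# Compact form: L(A) = alpha*A + gamma*tr(A)*I.
rhs = alpha * A + gamma * t * I

assert np.allclose(lhs, rhs)
```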

Now we consider the additional condition that $L$ commutes with adjoints. Since the adjoint decomposes $M_n(\mathbb{C})$ into two isotypic components, the self-adjoint and skew-adjoint matrices (note that this is a real decomposition; the adjoint is only real-linear), this is equivalent to the condition that $L$ preserves the self-adjoint and the skew-adjoint matrices. If we take $A$ to be self-adjoint with trace zero we get that $L(A) = \alpha A$ must continue to be self-adjoint, which gives that $\alpha$ is real. Next, if we take $A = I$ we get that $L(I) = (\alpha + \gamma n) I$ must continue to be self-adjoint, which gives that $\gamma$ is real as well. Note that $\gamma$ need not vanish: with $\gamma$ real, $L$ also preserves the skew-adjoint matrices, since for skew-adjoint $A$ the trace is purely imaginary and $\gamma \, \text{tr}(A) \, I$ is then skew-adjoint. So the maps in question are exactly $L(A) = \alpha A + \gamma \, \text{tr}(A) \, I$ with $\alpha, \gamma \in \mathbb{R}$; in particular $A \mapsto \text{tr}(A) \, I$ is a non-trivial example.
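The conclusion can be sanity-checked numerically (sample real values of $\alpha,\gamma$): the map $L(A) = \alpha A + \gamma\,\text{tr}(A)\,I$ with real coefficients satisfies both conditions, while a non-real $\gamma$ generically breaks the Hermiticity condition:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
I = np.eye(n)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

def L(A, alpha, gamma):
    return alpha * A + gamma * np.trace(A) * I

# Real alpha, gamma: both conditions hold.
alpha, gamma = 1.2, -0.8
assert np.allclose(L(A.conj().T, alpha, gamma), L(A, alpha, gamma).conj().T)
assert np.allclose(L(U @ A @ U.conj().T, alpha, gamma),
                   U @ L(A, alpha, gamma) @ U.conj().T)

# A non-real gamma breaks Hermiticity (generically, since tr(A) != 0 here).
assert not np.allclose(L(A.conj().T, alpha, 1j), L(A, alpha, 1j).conj().T)
```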