For a finite dimensional Hilbert space, is every automorphism "approximately inner"?


Given a finite dimensional Hilbert space $\mathcal H$ and an automorphism $U$ of $\mathcal{L(H)}$ (meaning $U$ is a linear isomorphism and $U(AB)=U(A)U(B)$), is it true that $U$ is a limit of inner automorphisms?

To me this sounds like something that should be a standard result, but I have not seen it stated explicitly before.

An inner automorphism is an automorphism of the form $U(A)=GAG^{-1}$ for some invertible $G\in \mathcal L(\mathcal H)$.
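As a quick sanity check that conjugation really does define an automorphism, here is a minimal numerical sketch (the dimension $n=4$ and the random invertible $G$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# A random matrix is invertible with probability 1; phi(A) = G A G^{-1}
# is the inner automorphism it induces.
G = rng.standard_normal((n, n))
Ginv = np.linalg.inv(G)
phi = lambda A: G @ A @ Ginv

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Multiplicativity: phi(AB) = phi(A) phi(B)
assert np.allclose(phi(A @ B), phi(A) @ phi(B))
# Linearity: phi(2A + 3B) = 2 phi(A) + 3 phi(B)
assert np.allclose(phi(2 * A + 3 * B), 2 * phi(A) + 3 * phi(B))
```

Invertibility of $G$ is what makes $\phi$ bijective; the same check fails for a singular $G$.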

Is the analogous statement true in infinite-dimensional Hilbert spaces, with respect to a suitable (non-indiscrete) topology?

Accepted answer:

It's more than that: it is inner.

We may consider $\mathcal {L(H)}=M_n(\mathbb C)$. I will denote the automorphism by $\phi$, so that there is no confusion with the matrices. Let $\{E_{kj}\}$ denote the matrix units associated with the canonical basis $\{e_1,\ldots,e_n\}$ of $\mathbb C^n$; these are the matrices with a $1$ in the $(k,j)$ entry and zeros elsewhere. They satisfy $$ E_{kj}E_{st}=\delta_{js}\,E_{kt}, \ \ \ \ E_{kj}e_j=e_k, $$ and $E_{11}+\cdots+E_{nn}=I$, with each $E_{kk}$ a rank-one projection and $E_{11},\ldots,E_{nn}$ pairwise orthogonal.
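The matrix-unit relations above are easy to verify numerically. A small sketch (using 0-based indices, so the text's $E_{kj}$ is `E[k][j]` with $k,j$ shifted down by one):

```python
import numpy as np

n = 3
# E[k][j] is the matrix unit E_{kj}: 1 in entry (k, j), zeros elsewhere.
E = [[np.zeros((n, n)) for _ in range(n)] for _ in range(n)]
for k in range(n):
    for j in range(n):
        E[k][j][k, j] = 1.0

# E_{kj} E_{st} = delta_{js} E_{kt}
for k in range(n):
    for j in range(n):
        for s in range(n):
            for t in range(n):
                expected = E[k][t] if j == s else np.zeros((n, n))
                assert np.allclose(E[k][j] @ E[s][t], expected)

# E_{11} + ... + E_{nn} = I
assert np.allclose(sum(E[k][k] for k in range(n)), np.eye(n))
```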

Because $\phi$ is multiplicative, $$ \phi(E_{kj})\phi(E_{st})=\delta_{js}\,\phi(E_{kt}). $$ Then $\phi(E_{11}),\ldots,\phi(E_{nn})$ are pairwise orthogonal rank-one idempotents that add to the identity (note that $\phi(I)=I$). Let $x_1\in\mathbb C^n$ be a nonzero vector with $\phi(E_{11})x_1=x_1$ (any nonzero vector in the range of the idempotent $\phi(E_{11})$ works), and for $k\geq2$ define $$x_k=\phi(E_{k1})x_1.$$ These are all nonzero: if $x_k=0$, then applying $\phi(E_{1k})$ gives $$0=\phi(E_{1k})x_k=\phi(E_{1k})\phi(E_{k1})x_1=\phi(E_{11})x_1=x_1,$$ a contradiction. The vectors $x_1,\ldots,x_n$ form a basis. Indeed, if $0=c_1x_1+\cdots+c_nx_n$, then for each $k$ \begin{align} 0&=\phi(E_{1k})(c_1x_1+\cdots+c_nx_n) =\phi(E_{1k})(c_1\phi(E_{11})x_1+c_2\phi(E_{21})x_1+\cdots+c_n\phi(E_{n1})x_1)\\ \ \\ &=c_k\phi(E_{1k})\phi(E_{k1})x_1=c_k\phi(E_{11})x_1=c_kx_1, \end{align} so $c_k=0$ since $x_1\neq0$. We have $n$ linearly independent vectors in $\mathbb C^n$, so a basis.
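The construction of the basis $x_1,\ldots,x_n$ can be sketched numerically. Since the only automorphisms we can write down concretely are conjugations, the sample $\phi$ below is conjugation by a random invertible matrix `S` (an assumption made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Sample automorphism for testing: conjugation by a random invertible S.
S = rng.standard_normal((n, n))
Sinv = np.linalg.inv(S)
phi = lambda A: S @ A @ Sinv

def unit(k, j):
    """Matrix unit E_{kj} (0-based indices)."""
    M = np.zeros((n, n))
    M[k, j] = 1.0
    return M

# x_1: a nonzero vector in the range of the rank-one idempotent phi(E_{11})
P = phi(unit(0, 0))
x = [P @ rng.standard_normal(n)]
assert np.allclose(P @ x[0], x[0])      # phi(E_{11}) x_1 = x_1

# x_k = phi(E_{k1}) x_1 for k >= 2
for k in range(1, n):
    x.append(phi(unit(k, 0)) @ x[0])

# The x_k are linearly independent, hence a basis of C^n
X = np.column_stack(x)
assert np.linalg.matrix_rank(X) == n
```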

Now let $G$ be the change of basis taking $e_1,\ldots,e_n$ to $x_1,\ldots,x_n$, that is, $Ge_j=x_j$. Then, for each $k,j$, $$ \phi(E_{kj})Ge_j=\phi(E_{kj})x_j=\phi(E_{kj})\phi(E_{j1})x_1=\phi(E_{k1})x_1=x_k=Ge_k, $$ while $\phi(E_{kj})Ge_t=\phi(E_{kj})\phi(E_{t1})x_1=0$ for $t\neq j$. That is, $$ G^{-1}\phi(E_{kj})Ge_j=e_k, $$ so $$ G^{-1}\phi(E_{kj})G=E_{kj}.$$ As any matrix $A$ is a linear combination of the matrix units $\{E_{kj}\}$, we get $$ G^{-1}\phi(A)G=A $$ for all $A\in M_n(\mathbb C)$, or $$ \phi(A)=GAG^{-1}. $$
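Putting the proof together: $G$ can be recovered from $\phi$ column by column as $Ge_j=x_j$, and one can then check $\phi(A)=GAG^{-1}$ on arbitrary matrices. As before, the sample $\phi$ is conjugation by a random invertible `S` (an illustrative assumption); the recovered $G$ then agrees with `S` up to a scalar, which conjugation cannot see.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
S = rng.standard_normal((n, n))          # sample phi: conjugation by S
phi = lambda A: S @ A @ np.linalg.inv(S)

def unit(k, j):
    """Matrix unit E_{kj} (0-based indices)."""
    M = np.zeros((n, n))
    M[k, j] = 1.0
    return M

# Build x_1, then x_k = phi(E_{k1}) x_1, and set G e_j = x_j,
# i.e. G has the x_j as its columns.
x1 = phi(unit(0, 0)) @ rng.standard_normal(n)
G = np.column_stack([phi(unit(k, 0)) @ x1 for k in range(n)])

# The conclusion of the proof: phi(A) = G A G^{-1} for arbitrary A
A = rng.standard_normal((n, n))
assert np.allclose(phi(A), G @ A @ np.linalg.inv(G))
```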