If $A$ is normal with $\sigma(A)\subseteq \mathbb{R}\cup\mathbb{T}$, does $\dim\ker(AB-BA)=\dim\ker(A^*B-BA^*)$?


This clearly holds if $A$ is self-adjoint, and also if $A$ is unitary, because in the unitary case $A(\ker(AB-BA))=\ker(A^*B-BA^*)$. Indeed, if $w\in\ker(AB-BA)$, then (using $BAw=ABw$) $A^*B(Aw)=A^*ABw=Bw=BA^*(Aw)$, so $Aw\in\ker(A^*B-BA^*)$; conversely, if $v\in\ker(A^*B-BA^*)$, then $v=AA^*v$, and with $w=A^*v$ we have $ABw=ABA^*v=AA^*Bv=Bv=BAA^*v=BAw$, so $w\in\ker(AB-BA)$.
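The mapping argument for the unitary case can be checked numerically. Below is a small sketch (assuming NumPy), with a diagonal unitary $A$ and a $B$ whose first column is zero so that the kernels are nontrivial; the example data is hypothetical, chosen only to illustrate the claim:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Diagonal unitary A (entries on the unit circle) and a B whose first
# column is zero, so that e1 lies in ker(AB - BA) and ker(A*B - BA*).
A = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, n)))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B[:, 0] = 0

def null_space(M, tol=1e-8):
    # Orthonormal basis of ker(M) from the SVD.
    _, s, Vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vh[rank:].conj().T

K = A @ B - B @ A                      # AB - BA
Ks = A.conj().T @ B - B @ A.conj().T   # A*B - BA*

N = null_space(K)
# A maps ker(AB - BA) into ker(A*B - BA*):
print(np.allclose(Ks @ (A @ N), 0))           # True
print(N.shape[1] == null_space(Ks).shape[1])  # True: equal kernel dims
```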

If $A$ is normal with spectrum contained in the union of the real line and the unit circle, then there are several things one can try.

On the one hand, if we diagonalize $A=UDU^*$, then $D$ splits into the sum of two diagonal matrices, one carrying the real entries with zeros elsewhere, and one carrying the entries on the unit circle with zeros elsewhere: $D=D_1+D_2$. Moreover, if $J$ is the diagonal matrix with $1$'s where $D$ has real entries and zeros elsewhere, then $D=(D_1-J)+(D_2+J)$, so $A=U(D_1-J)U^*+U(D_2+J)U^*=:A_1+A_2$, where $A_1$ is self-adjoint and $A_2$ is unitary. In this case $A^*=A_1+A_2^*$.
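A quick numerical sketch of this additive splitting (assuming NumPy; the spectrum used is hypothetical example data):

```python
import numpy as np

# A normal matrix with spectrum in R ∪ T: two real eigenvalues and two
# unimodular ones, conjugated by a random unitary U.
rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
D = np.diag([2.0, -0.5, np.exp(1j * 0.7), np.exp(1j * 2.1)])
A = U @ D @ U.conj().T

# Split D = (D1 - J) + (D2 + J): real part, unimodular part, and the
# correction J with 1's at the real positions.
d = np.diag(D)
is_real = np.isclose(d.imag, 0)
D1 = np.diag(np.where(is_real, d, 0))
D2 = np.diag(np.where(is_real, 0, d))
J = np.diag(is_real.astype(complex))

A1 = U @ (D1 - J) @ U.conj().T   # self-adjoint
A2 = U @ (D2 + J) @ U.conj().T   # unitary

print(np.allclose(A1, A1.conj().T))              # True
print(np.allclose(A2 @ A2.conj().T, np.eye(4)))  # True
print(np.allclose(A1 + A2, A))                   # True
```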

Another possibility is to split $D$ into the product of two diagonal matrices, one carrying the real entries with $1$'s where the unit-circle entries were, and one carrying the unit-circle entries with $1$'s where the real entries were: $D=D_1D_2$. This leads to $A=(UD_1U^*)(UD_2U^*)=:A_1A_2$, where again $A_1$ is self-adjoint and $A_2$ is unitary. In this case $A^*=A_1A_2^*$.
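The multiplicative splitting and the identity $A^*=A_1A_2^*$ can be sketched the same way (assuming NumPy, with the same hypothetical spectrum):

```python
import numpy as np

rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
d = np.array([2.0, -0.5, np.exp(1j * 0.7), np.exp(1j * 2.1)])
A = U @ np.diag(d) @ U.conj().T

# D = D1 * D2: D1 keeps the real eigenvalues (1's on the circle slots),
# D2 keeps the unimodular ones (1's on the real slots).
is_real = np.isclose(d.imag, 0)
D1 = np.diag(np.where(is_real, d, 1))
D2 = np.diag(np.where(is_real, 1, d))

A1 = U @ D1 @ U.conj().T   # self-adjoint
A2 = U @ D2 @ U.conj().T   # unitary, commutes with A1

print(np.allclose(A1 @ A2, A))                    # True
print(np.allclose(A.conj().T, A1 @ A2.conj().T))  # True: A* = A1 A2*
```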

I couldn't prove the statement in either case. It would be good to express $\dim\ker(A_1B-BA_1)$ and $\dim\ker(A_2B-BA_2)$ in terms of $\dim\ker(AB-BA)$.

Best answer

The statement is not true in general. Pick any four numbers $x,y,z,w$ in $\mathbb R\cup\mathbb T$ such that $$ \zeta=(y-z)\overline{(x-z)}(x-w)\overline{(y-w)} $$ is not a real number. E.g. when $x=1,y=i,z=-1$ and $w=0$, we have $\zeta=2(1-i)\not\in\mathbb R$. Let $$ A=\pmatrix{x\\ &y\\ &&z\\ &&&w}\quad\text{and}\quad B=\pmatrix{0&0&y-z&y-w\\ 0&0&x-z&x-w\\ 0&0&0&0\\ 0&0&0&0}. $$ Since $[A,B]_{ij}=(a_i-a_j)b_{ij}$ for diagonal $A=\operatorname{diag}(a_1,\dots,a_4)$, we get $$ [A,B]=\pmatrix{0&P\\ 0&0}\quad\text{and}\quad [A^\ast,B]=\pmatrix{0&Q\\ 0&0} $$ where $$ P=\pmatrix{(x-z)(y-z)&(x-w)(y-w)\\ (x-z)(y-z)&(x-w)(y-w)}\quad\text{and}\quad Q=\pmatrix{\overline{(x-z)}(y-z)&\overline{(x-w)}(y-w)\\ (x-z)\overline{(y-z)}&(x-w)\overline{(y-w)}}. $$ Hence $\operatorname{rank}[A,B]=\operatorname{rank}(P)\le1$ but $\operatorname{rank}[A^\ast,B]=\operatorname{rank}(Q)=2$ because $\det(Q)=\zeta-\overline{\zeta}\ne0$. Consequently $\dim\ker(AB-BA)\ge3>2=\dim\ker(A^*B-BA^*)$.
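The counterexample can be verified numerically, e.g. with NumPy:

```python
import numpy as np

# The counterexample with x = 1, y = i, z = -1, w = 0.
x, y, z, w = 1, 1j, -1, 0
A = np.diag([x, y, z, w]).astype(complex)
B = np.array([
    [0, 0, y - z, y - w],
    [0, 0, x - z, x - w],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
], dtype=complex)

K = A @ B - B @ A                      # [A, B]
Ks = A.conj().T @ B - B @ A.conj().T   # [A*, B]

print(np.linalg.matrix_rank(K))    # 1, so dim ker(AB - BA)   = 3
print(np.linalg.matrix_rank(Ks))   # 2, so dim ker(A*B - BA*) = 2
```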