Is there a nice way to show that $$\mathfrak{so}(n)=\{A \in M(n,\mathbb{R}): A+A^t=0\} $$ has zero center for $n \geq 3$?
$\mathfrak{so}(n)$ has trivial center when $n\geq 3$
Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 2 solutions below.
If one takes for granted the fact that the only matrices that commute with all real square matrices are scalar multiples of the identity matrix, one may argue as follows.
Every real diagonal matrix is a linear combination of $I_n$ and diagonal matrices with two $1$s and $n-2$ zeros on the diagonal, such as $\operatorname{diag}(1,1,0,\ldots,0)$ (e.g. $2\operatorname{diag}(1,0,0,\ldots,0)=\operatorname{diag}(1,1,0,\ldots,0)+\operatorname{diag}(1,0,1,0,\ldots,0)-\operatorname{diag}(0,1,1,0,\ldots,0)$, which uses $n\ge3$). Let $r=\frac1{\sqrt{2}}$. Observe that $\operatorname{diag}(1,1,0,\ldots,0)$ can be decomposed into a sum of the form $$ \frac{-1}{1-2r}\pmatrix{r&-r\\ r&r\\ &&I_{n-2}} + \frac{1-r}{1-2r}\pmatrix{1\\ &1\\ &&I_{n-2}} + \frac{r}{1-2r} \pmatrix{&-1\\ 1&\\ &&I_{n-2}}. $$ Therefore, when $n\ge3$, every real symmetric matrix, being orthogonally diagonalisable as $QDQ^t$ (where $Q$ may be chosen in $SO(n)$ by flipping the sign of a column if necessary), is a linear combination of special orthogonal matrices, since conjugation by $Q$ preserves $SO(n)$. So if $A$ centralises $SO(n)$, then $A$ commutes with all symmetric matrices.
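This decomposition can be sanity-checked numerically (a NumPy sketch for $n=4$; the helper name `embed` is just for building the block matrices above and is not from the text):

```python
import numpy as np

n = 4
r = 1 / np.sqrt(2)

def embed(block):
    """Place a 2x2 block in the top-left corner, with I_{n-2} in the bottom-right."""
    M = np.eye(n)
    M[:2, :2] = block
    return M

R45 = embed(np.array([[r, -r], [r, r]]))   # rotation by 45 degrees in the first two coordinates
Id  = embed(np.array([[1, 0], [0, 1]]))    # the identity matrix
R90 = embed(np.array([[0, -1], [1, 0]]))   # rotation by 90 degrees in the first two coordinates

combo = (-1 / (1 - 2 * r)) * R45 + ((1 - r) / (1 - 2 * r)) * Id + (r / (1 - 2 * r)) * R90
target = np.diag([1.0, 1.0] + [0.0] * (n - 2))

# All three matrices are special orthogonal: orthogonal with determinant 1 ...
for M in (R45, Id, R90):
    assert np.allclose(M @ M.T, np.eye(n)) and np.isclose(np.linalg.det(M), 1.0)
# ... and the combination reproduces diag(1, 1, 0, ..., 0).
assert np.allclose(combo, target)
print("decomposition verified")
```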
Now let $A$ lie in the centre of $\mathfrak{so}(n)$, so that $A$ commutes with every skew-symmetric matrix $K$. Then $A$ commutes with every power of $K$, hence with $e^{K}$. Since every special orthogonal matrix is the matrix exponential of some skew-symmetric matrix, $A$ centralises $SO(n)$.
Consequently, $A$ commutes with every symmetric and every skew-symmetric matrix; as any matrix is the sum of its symmetric and skew-symmetric parts, $A$ must commute with every matrix in $M_n(\mathbb R)$. Hence $A$ is a scalar multiple of the identity matrix. As the only scalar matrix in $\mathfrak{so}(n)$ is $0$, the proof is complete.
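The two facts used here, that $e^{K}$ is special orthogonal for skew-symmetric $K$ and that $\frac{d}{dt}e^{tK}\big|_{t=0}=K$, can be illustrated numerically (a sketch using SciPy's `expm`; the finite-difference derivative is only an illustration, not part of the proof):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
K = B - B.T                        # a random skew-symmetric matrix

# e^K is special orthogonal: orthogonal with determinant 1.
Q = expm(K)
assert np.allclose(Q @ Q.T, np.eye(n))
assert np.isclose(np.linalg.det(Q), 1.0)

# d/dt e^{tK} at t = 0 equals K (central finite difference),
# so differentiating A e^{tK} = e^{tK} A at t = 0 yields AK = KA.
h = 1e-6
deriv = (expm(h * K) - expm(-h * K)) / (2 * h)
assert np.allclose(deriv, K, atol=1e-6)
print("exponential step verified")
```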
Let $A=(a_{ij})\in {\mathfrak{so}}(n)$ be in the center of ${\mathfrak{so}}(n)$. For convenience we will sometimes write $a[i,j]$ instead of $a_{ij}$.
Denote by $E_{xy}$ the matrix all of whose coefficients are zero, except the one at the intersection of the $x$-th row and the $y$-th column, which equals $1$. In other words $E_{xy}=(\delta_{ix}\delta_{jy})_{1\leq i,j \leq n}$ where $\delta$ is the Kronecker symbol.
When $x\neq y$, we have $D_{xy}=E_{xy}-E_{yx}\in {\mathfrak{so}}(n)$, and
$$ \begin{array}{lcl} AE_{x,y}[i,j] &=& a_{ix}\delta_{yj} \\ AE_{y,x}[i,j] &=& a_{iy}\delta_{xj} \\ AD_{x,y}[i,j] &=& a_{ix}\delta_{yj}-a_{iy}\delta_{xj} \\ E_{x,y}A[i,j] &=& \delta_{ix}a_{yj} \\ E_{y,x}A[i,j] &=& \delta_{iy}a_{xj} \\ D_{x,y}A[i,j] &=& \delta_{ix}a_{yj}-\delta_{iy}a_{xj} \\ \end{array}\tag{1} $$
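The coordinate formulas in (1) can be sanity-checked against a random matrix (a NumPy sketch; indices are $1$-based as in the text and shifted to $0$-based for array access):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))

def E(x, y):
    """Matrix unit E_{xy}: 1 in row x, column y (1-based indices), zeros elsewhere."""
    M = np.zeros((n, n))
    M[x - 1, y - 1] = 1.0
    return M

x, y = 2, 4
D = E(x, y) - E(y, x)                       # D_{xy} = E_{xy} - E_{yx}
d = lambda i, j: 1.0 if i == j else 0.0     # Kronecker delta
a = lambda p, q: A[p - 1, q - 1]            # a[p, q] in the text's notation

for i in range(1, n + 1):
    for j in range(1, n + 1):
        # The six identities of (1), entry by entry.
        assert np.isclose((A @ E(x, y))[i - 1, j - 1], a(i, x) * d(y, j))
        assert np.isclose((A @ E(y, x))[i - 1, j - 1], a(i, y) * d(x, j))
        assert np.isclose((A @ D)[i - 1, j - 1], a(i, x) * d(y, j) - a(i, y) * d(x, j))
        assert np.isclose((E(x, y) @ A)[i - 1, j - 1], d(i, x) * a(y, j))
        assert np.isclose((E(y, x) @ A)[i - 1, j - 1], d(i, y) * a(x, j))
        assert np.isclose((D @ A)[i - 1, j - 1], d(i, x) * a(y, j) - d(i, y) * a(x, j))
print("formulas (1) verified")
```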
So, whenever $x\neq y$ and $1\leq i,j \leq n$, we must have:
$$ a_{ix}\delta_{yj}-a_{iy}\delta_{xj}=\delta_{ix}a_{yj}-\delta_{iy}a_{xj} \tag{2} $$
Let $u,v$ be two distinct indices between $1$ and $n$. Since $n\geq 3$, there is another index $w$ distinct from both $u$ and $v$. Using (2) with $x=w,y=u,i=w,j=v$ we obtain $$ 0=(a_{ww}\delta_{uv}-a_{wu}\delta_{wv})-(\delta_{ww}a_{uv}-\delta_{wu}a_{wv}) =-a_{uv}\tag{3} $$
Since $u,v$ were arbitrary distinct indices, $A$ must be diagonal. Next, using (2) with $x=v,y=u,i=v,j=u$ we obtain
$$ 0=(a_{vv}\delta_{uu}-a_{vu}\delta_{vu})-(\delta_{vv}a_{uu}-\delta_{vu}a_{vu})= a_{vv}-a_{uu} \tag{4} $$ So $A$ is a scalar multiple of the identity, say $A=\lambda I_n$. But a scalar matrix is skew-symmetric only if $\lambda=-\lambda$, so $A=0$.
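As a check on the whole result, the dimension of the centre can be computed by plain linear algebra: $A=\sum_{x<y}c_{xy}D_{xy}$ is central iff $[A,D_{xy}]=0$ for every basis element $D_{xy}$, which is a linear system in the coefficients $c_{xy}$ (a NumPy sketch; the helper name `center_dim_so` is hypothetical):

```python
import numpy as np
from itertools import combinations

def center_dim_so(n):
    """Dimension of the centre of so(n), found by solving [A, D_{xy}] = 0
    over the basis D_{xy} = E_{xy} - E_{yx} (x < y, 0-based here)."""
    def D(x, y):
        M = np.zeros((n, n))
        M[x, y], M[y, x] = 1.0, -1.0
        return M
    basis = [D(x, y) for x, y in combinations(range(n), 2)]
    # Column k of the system stacks the commutators [B_k, B_j] over all j;
    # the centre is the null space of this matrix.
    cols = [np.concatenate([(Bk @ Bj - Bj @ Bk).ravel() for Bj in basis])
            for Bk in basis]
    M = np.column_stack(cols)
    return int(len(basis) - np.linalg.matrix_rank(M))

print([center_dim_so(n) for n in range(2, 6)])  # [1, 0, 0, 0]
```

As expected, $\mathfrak{so}(2)$ (which is one-dimensional, hence abelian) has centre of dimension $1$, while the centre is trivial for $n\geq 3$.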