Confusion about the Jordan-Chevalley/Dunford decomposition over $\mathbb{R}$, example of a rotation (solved!!!)


I'm writing some notes on Jordan-Chevalley decompositions in which I want to treat both the real and complex case in one statement.

One could of course write, as in the French Wikipedia article: an endomorphism $u\in \mathrm{End}(V)$ splits (i.e. its minimal polynomial splits into factors of degree 1) iff there exists a pair $(d,n)$ with $d$ diagonalizable, $n$ nilpotent, $dn=nd$ and $u=d+n$.
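As a sanity check on this criterion (a sketch of mine, not part of the original statement), a short SymPy computation confirms that for the rotation considered below the minimal polynomial $X^2-2\cos\theta\,X+1$ has negative discriminant whenever $\sin\theta\neq 0$, hence is irreducible over $\mathbb{R}$, so such a $u$ does not split over $\mathbb{R}$, while over $\mathbb{C}$ it has the roots $e^{\pm i\theta}$:

```python
import sympy as sp

# For the rotation by theta, the minimal polynomial is
# X^2 - 2 cos(theta) X + 1.  Its discriminant is -4 sin^2(theta) <= 0,
# so it is irreducible over R whenever sin(theta) != 0.
X, th = sp.symbols('X theta', real=True)
p = X**2 - 2*sp.cos(th)*X + 1

# Discriminant 4 cos^2(theta) - 4 = -4 sin^2(theta)
disc = sp.discriminant(p, X)
assert sp.simplify(disc + 4*sp.sin(th)**2) == 0

# Over C, e^{i theta} is a root of p
val = p.subs(X, sp.exp(sp.I*th)).rewrite(sp.cos)
assert sp.simplify(sp.expand(val)) == 0
```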

But it seems to me that the notion of semi-simplicity precisely allows one to get rid of the assumption that the minimal polynomial splits. Any endomorphism can be written as a semi-simple one plus a nilpotent one, and that decomposition is unique if the two are required to commute. One does find such a statement e.g. here or on English Wikipedia, but I'm not familiar with field extensions, perfect fields, etc. One also finds a statement given exactly for $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$: Thm 19.21, p. 614 of "Mathématiques pour l'agrégation, Algèbre et géométrie", Jean-Étienne Rombaldi.

The idea is to adapt the $\mathbb{K}=\mathbb{C}$ case to $\mathbb{R}$: for $u\in \mathrm{End}(V)$ where $V$ is an $\mathbb{R}$-vector space, consider the complexification of $V$ and the extension of $u$ to it. In terms of matrices, the matrix of $u$ keeps its real coefficients. Now take the Jordan-Chevalley decomposition of $u$ as well as that of its conjugate: $u=d+n$ and $\overline{u}=\overline{d} + \overline{n}$. Since $u=\overline{u}$, these are two commuting semi-simple-plus-nilpotent decompositions of the same endomorphism, so by uniqueness we must have $d=\overline{d}$ and $n=\overline{n}$, i.e. $d$ and $n$ are real.
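To convince myself, here is a quick SymPy check of this conjugation argument (the $4\times 4$ test matrix $\begin{pmatrix} R & I \\ 0 & R\end{pmatrix}$, with $R$ a quarter-turn, is my own choice of example with a nontrivial nilpotent part, not taken from the theorem):

```python
import sympy as sp

# A real 4x4 matrix: block form [[R, I], [0, R]] with R = rotation by pi/2.
# Its eigenvalues are +-i, each in a 2x2 Jordan block, so both the
# semi-simple part d and the nilpotent part n are nonzero.
A = sp.Matrix([
    [0, -1, 1,  0],
    [1,  0, 0,  1],
    [0,  0, 0, -1],
    [0,  0, 1,  0],
])

# Jordan form over C: A = P J P^{-1}; the semi-simple part keeps only
# the diagonal of J, conjugated back by P.
P, J = A.jordan_form()
D = sp.simplify(P * sp.diag(*[J[i, i] for i in range(4)]) * P.inv())
N = sp.simplify(A - D)

assert D == D.conjugate()                        # d is real: d = conj(d)
assert sp.simplify(N**2) == sp.zeros(4, 4)       # n is nilpotent
assert sp.simplify(D*N - N*D) == sp.zeros(4, 4)  # d and n commute
```

By uniqueness of the decomposition, $d$ here is just $\begin{pmatrix} R & 0 \\ 0 & R\end{pmatrix}$ and $n = \begin{pmatrix} 0 & I \\ 0 & 0\end{pmatrix}$, both with real entries.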

It looks like an honest proof, but let us look at the example of the $2\times 2$ rotation $u=\begin{pmatrix} \cos \theta & -\sin \theta\\ \sin \theta & \cos \theta\end{pmatrix}$


I thought there was a problem, because I remembered that in $\mathbb{C}^2$ two linearly independent eigenvectors are given by $ \begin{pmatrix} 1 \\ i \end{pmatrix},\ \begin{pmatrix} 1 \\ -i \end{pmatrix}$, so I had in mind that the diagonalizable part was $\begin{pmatrix} e^{i\theta} & 0 \\ 0 & e^{-i\theta} \end{pmatrix}$, which does not satisfy $\overline{d}=d$. Moreover, going through the proof of the theorem, we see that $d$ is defined by $\displaystyle d:= \sum_{i=1}^r \lambda_i\, \Pi_i$, where the $\Pi_i$ are the projections onto the generalized eigenspaces $\operatorname{Ker}(u-\lambda_i)^{m_i}$, so I thought that such a $d$ was diagonal. But it is not: it is merely diagonalizable, possibly only over an extension of the original field (which could actually serve as an interpretation of semi-simplicity)!

Answer: $d=u$ (and $n=0$) in our example. The minimal polynomial is $\mu_u= X^2 - 2\cos\theta\, X +1 = (X-e^{i\theta}) (X-e^{-i\theta})$. We do have $E_1:=\operatorname{Ker}(u- e^{i\theta})= \mathbb{C}\, \begin{pmatrix} 1 \\ -i \end{pmatrix} $ and $E_2:=\operatorname{Ker}(u- e^{-i\theta})= \mathbb{C}\, \begin{pmatrix} 1 \\ i \end{pmatrix} $, which cannot be seen within the $\mathbb{R}$-vector space framework, but let us work out the projections.
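These eigenpairs and the minimal polynomial can be verified symbolically (a pure verification sketch, nothing beyond the claims above):

```python
import sympy as sp

# The rotation matrix u with a real symbolic angle theta
th = sp.symbols('theta', real=True)
u = sp.Matrix([[sp.cos(th), -sp.sin(th)],
               [sp.sin(th),  sp.cos(th)]])

# Characteristic (= minimal, since u is not scalar) polynomial:
# X^2 - 2 cos(theta) X + 1
X = sp.symbols('X')
assert sp.simplify(u.charpoly(X).as_expr()
                   - (X**2 - 2*sp.cos(th)*X + 1)) == 0

# u (1, -i)^T = e^{i theta} (1, -i)^T  and  u (1, i)^T = e^{-i theta} (1, i)^T
v1 = sp.Matrix([1, -sp.I])
v2 = sp.Matrix([1,  sp.I])
simp = lambda M: M.applyfunc(lambda e: sp.simplify(e.rewrite(sp.cos)))
assert simp(u*v1 - sp.exp( sp.I*th)*v1) == sp.zeros(2, 1)
assert simp(u*v2 - sp.exp(-sp.I*th)*v2) == sp.zeros(2, 1)
```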

One finds by trial and error the following Bézout relation expressing coprimality (assuming $\sin\theta \neq 0$; otherwise $u = \pm\operatorname{Id}$ is already diagonal): $\frac{1}{2i \sin\theta}(X-e^{-i\theta}) - \frac{1}{2i \sin\theta} (X-e^{i\theta}) =1$, from which we deduce that the projection onto $E_1$ is $$\Pi_1:=\frac{1}{2i \sin\theta}(u-e^{-i\theta})= \frac{1}{2i \sin\theta}\begin{pmatrix} \cos\theta -e^{-i\theta} & -\sin \theta\\ \sin \theta & \cos \theta -e^{-i\theta}\end{pmatrix} = \frac{1}{2} \begin{pmatrix} 1 & i \\ -i & 1\end{pmatrix} $$ and $$\Pi_2:= - \frac{1}{2i \sin\theta}(u-e^{i\theta})= -\frac{1}{2i \sin\theta}\begin{pmatrix} \cos\theta -e^{i\theta} & -\sin \theta\\ \sin \theta & \cos \theta -e^{i\theta}\end{pmatrix} = \frac{1}{2} \begin{pmatrix} 1 & -i \\ i & 1\end{pmatrix} $$ We check that $\Pi_i^2 = \Pi_i$, that $\Pi_1 + \Pi_2 = \operatorname{Id}$, and, the magical part, that $$e^{i\theta}\, \Pi_1 + e^{-i\theta}\, \Pi_2 = \frac{1}{2} \begin{pmatrix} e^{i\theta}+ e^{-i\theta} & ie^{i\theta}-i e^{-i\theta} \\ -ie^{i\theta}+i e^{-i\theta} & e^{i\theta}+ e^{-i\theta} \end{pmatrix}=u$$ so that $d=u$ and $n=u-d=0$, as claimed.
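The whole computation can be replayed numerically (a sketch at one concrete angle, chosen by me so that $\sin\theta\neq 0$):

```python
import numpy as np

# Numerical check of the projections at a concrete (hypothetical) angle
theta = 0.7                      # any value with sin(theta) != 0
c, s = np.cos(theta), np.sin(theta)
u = np.array([[c, -s], [s, c]], dtype=complex)
I2 = np.eye(2)

# Pi_1, Pi_2 from the Bezout relation above
P1 =  (u - np.exp(-1j*theta)*I2) / (2j*s)
P2 = -(u - np.exp( 1j*theta)*I2) / (2j*s)

assert np.allclose(P1 @ P1, P1) and np.allclose(P2 @ P2, P2)  # idempotent
assert np.allclose(P1 + P2, I2)                     # Pi_1 + Pi_2 = Id
assert np.allclose(P1, 0.5*np.array([[1, 1j], [-1j, 1]]))  # closed form above

d = np.exp(1j*theta)*P1 + np.exp(-1j*theta)*P2
assert np.allclose(d, u)       # d = u, hence n = u - d = 0
assert np.allclose(d.imag, 0)  # and d is real, as the conjugation argument predicts
```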