Adjoints of normal transformations on real inner product spaces


Can someone please check me on the following result and proof?

Let $E$ be a non-zero finite-dimensional inner product space over $\mathbb{R}$. A linear transformation $\varphi:E\to E$ is normal if and only if its adjoint $\varphi^*$ can be written in the form $\varphi^*=f(\varphi)$ for some polynomial $f\in\mathbb{R}[t]$.

Proof: If $\varphi^*=f(\varphi)$, then $\varphi^*\varphi=f(\varphi)\varphi=\varphi f(\varphi)=\varphi\varphi^*$, so $\varphi$ is normal. Conversely, suppose $\varphi$ is normal and let $$E=E_1\oplus\cdots\oplus E_r\tag{1}$$ be the generalized eigenspace decomposition of $E$ under $\varphi$, with $\varphi_i:E_i\to E_i$ the induced restriction. Since $\varphi$ is normal, (1) is an orthogonal decomposition and each $\varphi_i$ is a scalar multiple of an isometry (see [1], p. 437).
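For concreteness, here is a quick numerical sketch (using NumPy; the angle and scalars are invented for illustration) of the structure this decomposition describes: a block-diagonal operator built from a scaled rotation and a scalar is normal, and each block rescales to an isometry.

```python
import numpy as np

# Hypothetical normal operator on R^3, mirroring decomposition (1):
# a scaled rotation (lambda_1 * isometry) on a 2-dimensional invariant
# subspace, plus a scalar on the orthogonal complement.
theta = 0.7
tau1 = np.array([[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]])  # an isometry of R^2
A = np.zeros((3, 3))
A[:2, :2] = 2.0 * tau1     # phi_1 = lambda_1 * tau_1 with lambda_1 = 2
A[2, 2] = -3.0             # phi_2 = lambda_2 * id  with lambda_2 = -3

# A commutes with its adjoint (the transpose), so A is normal.
assert np.allclose(A @ A.T, A.T @ A)

# The restriction to each summand, rescaled by 1/lambda_i, is orthogonal.
assert np.allclose(tau1 @ tau1.T, np.eye(2))
```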

Write $\varphi_i=\lambda_i\tau_i$ where $\lambda_i\in\mathbb{R}$ and $\tau_i$ is an isometry. Since $\tau_i$ is invertible, Cayley–Hamilton gives $(\tau_i)^*=\tau_i^{-1}=f_i(\tau_i)$ for some $f_i\in\mathbb{R}[t]$. Since (1) is orthogonal, the adjoint decomposes along (1) as $$\varphi^*=\sum(\varphi_i)^*=\sum\lambda_i(\tau_i)^*=\sum\lambda_i f_i(\tau_i)\tag{2}$$ Define $g_i\in\mathbb{R}[t]$ by $$g_i(t)=\begin{cases} 0&\text{if }\lambda_i=0\\ \lambda_if_i(\lambda_i^{-1}t)&\text{if }\lambda_i\ne0 \end{cases}$$ so that $g_i(\varphi_i)=(\varphi_i)^*$ in either case. Let $\pi_i$ be the $i$-th projection operator for (1). Then $\pi_i=h_i(\varphi)$ for some $h_i\in\mathbb{R}[t]$. It follows from (2) that $$\varphi^*=\sum g_i(\varphi)h_i(\varphi)$$ Setting $f=\sum g_ih_i$, we have $\varphi^*=f(\varphi)$.
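The construction above can be made explicit in the simplest case (a sketch in NumPy; the angle and scale factor are made up for illustration): for a single scaled rotation $\varphi=\lambda\tau$ on $\mathbb{R}^2$, Cayley–Hamilton gives $\tau^{-1}=2\cos\theta\, I-\tau$, and hence an explicit linear $f$ with $f(\varphi)=\varphi^*$.

```python
import numpy as np

theta, lam = 0.7, 2.0
tau = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])  # isometry (rotation)
A = lam * tau   # phi = lambda * tau: normal, but not symmetric

# Cayley-Hamilton for the rotation: tau^2 - 2cos(theta) tau + I = 0, so
# tau^* = tau^{-1} = 2cos(theta) I - tau.  Scaling by lambda gives
# A^* = 2 lam cos(theta) I - A, i.e. f(t) = 2 lam cos(theta) - t.
f_A = 2 * lam * np.cos(theta) * np.eye(2) - A
assert np.allclose(f_A, A.T)
```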

Note: I know that the result is true in the complex case but I am interested in the real case.

References:

  1. Greub, W. Linear Algebra, 4th ed. Springer, 1975.
Accepted answer:

Your proof works. Alternatively, you can derive the real case directly from the complex case. Complexify $E$; by the complex case (an immediate consequence of the spectral theorem for normal operators together with Lagrange interpolation) there is a polynomial $g\in\mathbb{C}[t]$ such that $\varphi^*=g(\varphi)$. Now pick a basis for $E$, so that $\varphi$ is represented by a matrix with real entries. Since $\varphi$ and $\varphi^*$ have real entries, we also have $$\varphi^*=\overline{\varphi^*}=\overline{g}(\overline{\varphi})=\overline{g}(\varphi).$$ Here the overline on a matrix denotes entrywise complex conjugation, and $\overline{g}$ is the polynomial obtained by conjugating each coefficient of $g$. We thus have $$\varphi^*=\frac{g(\varphi)+\overline{g}(\varphi)}{2},$$ i.e. $\varphi^*=f(\varphi)$ where $f=\frac{g+\overline{g}}{2}$ is a polynomial with real coefficients.
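This complexification argument can be checked numerically (a sketch in NumPy; the rotation matrix is an invented example): interpolate $g$ so that $g(a)=\overline{a}$ at each eigenvalue of $\varphi$, average $g$ with its coefficientwise conjugate to get a real $f$, and verify $f(\varphi)=\varphi^*$.

```python
import numpy as np

# Hypothetical normal real matrix with non-real eigenvalues: a rotation.
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Lagrange interpolation: solve a Vandermonde system for the unique
# degree-1 polynomial g with g(a) = conj(a) at each eigenvalue of A.
eigvals = np.linalg.eigvals(A)                  # e^{i theta}, e^{-i theta}
g = np.linalg.solve(np.vander(eigvals), eigvals.conj())

# Average g with its coefficientwise conjugate: f has real coefficients.
f = ((g + g.conj()) / 2).real

def polyvalm(coeffs, M):
    """Horner evaluation of a polynomial (highest coefficient first)
    at a square-matrix argument."""
    out = np.zeros_like(M, dtype=float)
    for c in coeffs:
        out = out @ M + c * np.eye(M.shape[0])
    return out

# The real polynomial f maps A to its adjoint A^T.
assert np.allclose(polyvalm(f, A), A.T)
```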

(Or alternatively, instead of thinking in terms of conjugating matrices, you can note that $g$ is chosen such that $g(a)=\overline{a}$ for each complex eigenvalue $a$ of $\varphi$. Now for each such $a$, $\overline{a}$ is also an eigenvalue of $\varphi$. Thus $g(\overline{a})=a$ so $\overline{g}(a)=\overline{g(\overline{a})}=\overline{a}$. That is, $\overline{g}$ also conjugates each eigenvalue of $\varphi$ and thus so does $\frac{g+\overline{g}}{2}$, which has real coefficients.)