Do the statements hold in an inner product space over $\mathbb R$ as well?


Let $V$ be an $n$-dimensional inner product space over $\mathbb C$ and $f\in \mathcal L (V)$ normal. Show that:

  • $f^2=f^3 \implies f=f^2 \implies f = f^*$
  • $f$ nilpotent $\implies f=0$

After doing the exercise, it occurred to me that the statements seem to hold in an inner product space over $\mathbb R$ as well. Is that true?

It's from this text I'm going through; its exercises don't have any redundant hypotheses, and many have $\mathbb F \in \{ \mathbb R, \mathbb C \}$ in their setup (which would be appropriate here as well, if I am right).

2 Answers

Further to my comment (now deleted, because it's reproduced here): $\newcommand{\norm}[1]{\left\lVert#1\right\rVert}$ $\DeclareMathOperator{\nnull}{null}$ $\DeclareMathOperator{\range}{range}$

Someone is sure to post a much cleaner answer to this question, but meanwhile, here's a not very quick but certainly very dirty answer!

I thought at first that your conjecture wasn't true, because when doing the similar exercise 7.10 in Axler's Linear Algebra Done Right (2nd ed.), I used the complex spectral theorem, and couldn't see how else to do it; also, Axler requires $V$ to be complex.

Exercise 8.7 in the same book asks you to show that if $f$ is self-adjoint and nilpotent, then $f = 0$. Unlike the other exercise, this one isn't stated with the requirement that $V$ be complex. That hypothesis would only be needed here if it were needed for the first part (and if you didn't otherwise know that $f$ is self-adjoint); it turns out that it isn't, i.e. you're quite right.

Axler's Theorem 7.25 characterises normal operators on real inner-product spaces by the existence of an orthonormal basis with respect to which the operator has a block diagonal matrix in which each block is either $1 \times 1$ or else a $2 \times 2$ matrix of the form: $$ A = \begin{pmatrix} a & -b \\ b & a \end{pmatrix} $$ with $b > 0$. So it is enough to show that if $a, b$ are real and $A^2 = A^3$ then $b = 0$.
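As a quick sanity check of this block form (numpy is my choice here, not anything the exercise assumes): each $2 \times 2$ block of this shape is normal, and conjugating a block-diagonal matrix of such blocks by an orthogonal matrix yields a normal operator.

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 2.0, 1.0
A = np.array([[a, -b],
              [b,  a]])
# Each 2x2 block of this form is normal: A A^T = (a^2 + b^2) I = A^T A.
assert np.allclose(A @ A.T, A.T @ A)

# A block-diagonal matrix with such blocks, conjugated by a random
# orthogonal matrix Q, is again normal.
B = np.zeros((3, 3))
B[:2, :2] = A
B[2, 2] = 3.0
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
F = Q @ B @ Q.T
assert np.allclose(F @ F.T, F.T @ F)
```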

The condition is equivalent to \begin{align*} a^2 - b^2 & = a^3 - 3ab^2, \\ 2ab & = 3a^2b - b^3. \end{align*} If $b \ne 0$, then the second equation gives $b^2 = a(3a - 2)$, and $a \ne 0$, so we can substitute for $b^2$ in the first equation, and cancel the factor $a$, giving: \begin{gather*} (3a - 1)(3a - 2) = a(a - 1), \\ \therefore\ (2a - 1)^2 = 0. \end{gather*} But if $a = \frac{1}{2}$, then $b^2 < 0$, a contradiction.
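For what it's worth, the same elimination can be checked with a computer algebra system (sympy here is my choice, not part of the exercise): the only real solutions of $A^2 = A^3$ have $b = 0$.

```python
import sympy as sp

a, b = sp.symbols('a b')
A = sp.Matrix([[a, -b], [b, a]])

# All four entries of A^2 - A^3 must vanish.
eqs = [sp.expand(e) for e in (A**2 - A**3)]
solutions = sp.solve(eqs, [a, b], dict=True)

# Keep only the real solutions; each has b = 0, i.e. the 2x2 blocks
# degenerate, matching the hand computation above.
real_sols = [s for s in solutions if all(v.is_real for v in s.values())]
print(real_sols)
```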

But it would be nice to see a properly mathematical answer to your question, instead of a mindless calculation like this!

(Also, please excuse me if this has all been complete nonsense. It's been years since I studied this subject, and I've never had a secure grasp of it. I wasn't sure I should attempt to post an answer, but hours had gone by without one appearing.)

Embarrassed afterthought: I suppose I should just have used the fact that unless $a = b = 0$, $A$ is invertible - essentially it's a complex number - therefore $A = I$. Oh, well!

Update: here's a more abstract, "cleaner" version of the latter argument.

Because $f$ is normal, $\norm{f(x)} = \norm{f^*(x)}$ for all $x \in V$. Hence, $\nnull{f} = \nnull{f^*}$. But, for all $f$ (whether normal or not), $\range{f} = (\nnull{f^*})^\perp$. Hence, for normal $f$, $\range{f} = (\nnull{f})^\perp$.

Hence, any normal $f: V \to V$ restricts to a linear map: $$ \hat{f}: (\nnull{f})^\perp \to (\nnull{f})^\perp = \range{f} = \range{\hat{f}}. $$ Because $\hat{f}$ is onto, and $V$ is finite-dimensional, $\hat{f}$ is $1-1$, therefore $\hat{f}$ is invertible.

If $f^2 = f^3$, then $\hat{f}^2 = \hat{f}^3$. Since $\hat{f}$ is invertible, it follows that $\hat{f}$ is the identity map on $(\nnull{f})^\perp$, i.e. $f$ is the orthogonal projection of $V$ onto $(\nnull{f})^\perp$. In particular $f = f^2$, and since orthogonal projections are self-adjoint, $f = f^*$.
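A small numerical illustration of this conclusion (using numpy; the construction is mine, not from the exercise): any orthogonal projection is normal and satisfies $f^2 = f^3$, and the argument above says these are the only normal solutions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build f as the orthogonal projection onto a random 2-dimensional
# subspace of R^4: P = U U^T where U has orthonormal columns.
U, _ = np.linalg.qr(rng.standard_normal((4, 2)))
P = U @ U.T

assert np.allclose(P @ P.T, P.T @ P)   # P is normal
assert np.allclose(P @ P, P @ P @ P)   # f^2 = f^3
assert np.allclose(P, P @ P)           # hence f = f^2
assert np.allclose(P, P.T)             # and f = f^*
```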


The statements hold for real inner product spaces too.

Let $V$ be a real inner product space, and let $f\in\mathcal{L}(V)$ be normal. Let $A$ be the matrix representing $f$ with respect to an orthonormal basis. Then $A$ is a normal matrix; viewed as a complex matrix it is still normal, so the statements in question hold for $A$, and it follows that they hold for $f$ as well.

The same argument can be expressed in a more "canonical" way. Let $V'=V\otimes_\mathbb{R}\mathbb{C}$ be the complexification of $V$, and let $f'\in\mathcal{L}(V')$ be defined by $f'=f\otimes \mathrm{id}$. Then $f'$ is normal, so the statements hold for $f'$, and hence they hold for $f$ too.
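As a footnote on the nilpotent statement: for a normal matrix the Frobenius norm equals the $\ell^2$ norm of the eigenvalues, and a nilpotent matrix has only the eigenvalue $0$, which gives another way to see that a normal nilpotent operator vanishes. A quick numerical check of the norm identity (numpy, my own construction):

```python
import numpy as np

rng = np.random.default_rng(2)

# A real normal matrix: orthogonally conjugate a block-diagonal matrix
# with one rotation-scaling block (eigenvalues 2 +/- i) and one 1x1
# block (eigenvalue 3).
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
B = np.array([[2.0, -1.0, 0.0],
              [1.0,  2.0, 0.0],
              [0.0,  0.0, 3.0]])
F = Q @ B @ Q.T
assert np.allclose(F @ F.T, F.T @ F)   # F is normal

# For normal F: ||F||_F^2 = sum of |eigenvalue|^2, so if every
# eigenvalue were 0 (nilpotency), F itself would be 0.
lam = np.linalg.eigvals(F)
assert np.isclose(np.linalg.norm(F, 'fro'), np.linalg.norm(lam))
```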