Proof of the spectral decomposition theorem in the finite dimensional case

Let

  • $H$ be a $\mathbb C$-Hilbert space with $\dim H\in\mathbb N$;
  • $A\in\mathfrak L(H)$ be normal;
  • $E_\lambda(A):=\mathcal N(\lambda-A)$ and $d_\lambda(A):=\dim E_\lambda(A)$ for $\lambda\in\mathbb C$.

I know that $\sigma(A)=\sigma_p(A)$ and $$E_\lambda(A)\perp E_\mu(A)\;\;\;\text{for all }\lambda,\mu\in\mathbb C\text{ with }\lambda\ne\mu.$$

Let $\left(e^{(\lambda)}_1,\ldots,e^{(\lambda)}_{d_\lambda(A)}\right)$ be an orthonormal basis of $E_\lambda(A)$ for $\lambda\in\{0\}\cup\sigma(A)$, $$U:=\operatorname{span}\underbrace{\left\{e^{(\lambda)}_i:1\le i\le d_\lambda(A)\text{ and }\lambda\in\sigma(A)\setminus\{0\}\right\}}_{=:\:B}$$

How can we show that

  1. $U=\mathcal R(A)$; and
  2. $E_0(A)\oplus U=H$?

I know how this can be proved$^1$ in the more general case of a possibly infinite-dimensional $H$ and a compact $A$. However, the argument is rather complicated and I'd like to know whether there is a shorter argument available in the present simplified setting.

I'd also like to know whether assuming that $A$ is self-adjoint simplifies the argument further.


$^1$ In the general case, letting $\tilde H:=E_0(A)\oplus\overline U$, we can show that $\tilde H^\perp$ is an invariant subspace of both $A$ and $A^\ast$. Since $\tilde H^\perp$ is closed, $T:=\left.A\right|_{\tilde H^\perp}$ is again compact and the spectral radius $r(T)$ of $T$ is given by $$r(T)=\max_{\lambda\in\sigma(T)}|\lambda|=\left\|T\right\|_{\mathfrak L(\tilde H^\perp)}\tag1.$$ Assume $T\ne0$. Then, by $(1)$, $\sigma(T)\setminus\{0\}\ne\emptyset$. Since $T$ is compact, $\sigma_p(T)\setminus\{0\}=\sigma(T)\setminus\{0\}$ and hence there is a pair $(\lambda,x)\in\left(\mathbb C\setminus\{0\}\right)\times\left(\tilde H^\perp\setminus\{0\}\right)$ with $$Tx=\lambda x\tag2.$$ But this implies $Ax=Tx=\lambda x$ and hence $\lambda\in\sigma_p(A)\setminus\{0\}$ and $x\in E_\lambda(A)\subseteq U\subseteq\overline U\subseteq\tilde H$; i.e. $$x\in\tilde H^\perp\cap\tilde H=\{0\},$$ contradicting $x\ne0$.

Two answers are given below.

Here's one argument. First, we note that $E_0(A) = E_0(A^*)$. If $A$ is self-adjoint, this is obvious. More generally, if $A$ is normal we have $$ Ax = 0 \iff \|Ax\|^2 = \langle A^*Ax,x \rangle = 0 \iff \langle AA^*x,x \rangle = \|A^*x\|^2 = 0 \iff A^*x = 0. $$ Then, we use the general fact that, in the finite dimensional case, $\mathcal R(A) = \mathcal N(A^*)^\perp = E_0(A^*)^\perp = E_0(A)^\perp$; thus, we have $\mathcal R(A) \oplus E_0(A) = H$.
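As a numerical sanity check (a sketch, not part of the proof), the identity $\|Ax\| = \|A^*x\|$ for normal $A$, and hence $\mathcal N(A) = \mathcal N(A^*)$, can be verified on a small example. The $2\times2$ matrix below is a hypothetical choice: $A = (1+i)P$ with $P$ the orthogonal projection onto $\operatorname{span}\{(1,-1)\}$, so $A$ is normal but not self-adjoint and has a nontrivial kernel.

```python
# Sanity check: for a normal matrix A, ||Ax|| = ||A*x|| for every x,
# hence N(A) = N(A*).  Hypothetical example: A = (1+i) * P, where P
# projects onto span{(1,-1)}; A is normal but not self-adjoint.

def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def adjoint(M):
    """Conjugate transpose M* of a complex matrix."""
    n, m = len(M), len(M[0])
    return [[M[i][j].conjugate() for i in range(n)] for j in range(m)]

def norm(x):
    return sum(abs(c) ** 2 for c in x) ** 0.5

c = (1 + 1j) / 2
A = [[c, -c], [-c, c]]       # normal: A = (1+i) P with P self-adjoint
A_star = adjoint(A)

# ||Ax|| == ||A*x|| for a few test vectors
for x in ([1, 0], [0, 1], [1, 2j], [3 - 1j, 1j]):
    assert abs(norm(matvec(A, x)) - norm(matvec(A_star, x))) < 1e-12

# (1, 1) spans N(A); it lies in N(A*) as well, as the argument predicts.
kernel_vec = [1, 1]
assert norm(matvec(A, kernel_vec)) < 1e-12
assert norm(matvec(A_star, kernel_vec)) < 1e-12
```

The same check with any non-normal matrix (e.g. a nilpotent shift) would fail, which is why normality is essential here.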

Now, the statement that $E_0(A) \oplus U = H$ is equivalent to the diagonalizability of $A$. In finite dimensional spaces, this is equivalent to the statement that for all $\lambda$, $\mathcal N(A - \lambda) = \mathcal N\left((A - \lambda)^2\right)$. We have already proved that $\mathcal N(M) = \mathcal N(M^*)$ for all normal operators $M$. Thus, for the normal operator $M = A-\lambda$, we see that $$ Mx = 0 \iff M^*Mx = 0 \iff M(Mx) = 0, $$ where the first equivalence uses $\langle M^*Mx,x \rangle = \|Mx\|^2$ and the second applies $\mathcal N(M^*) = \mathcal N(M)$ to the vector $Mx$. The conclusion follows.
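The role of normality here can be illustrated numerically (a hypothetical sketch with $2\times2$ matrices, not part of the argument): for the non-normal nilpotent shift, $\mathcal N(M^2) \supsetneq \mathcal N(M)$, while for a normal $M = A - \lambda$ the two null spaces coincide.

```python
# Contrast N(M) with N(M^2): the equality N(M^2) = N(M) holds for
# normal M but fails in general.  Both 2x2 examples are hypothetical.

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def norm(x):
    return sum(abs(c) ** 2 for c in x) ** 0.5

# Non-normal nilpotent shift: M^2 = 0, yet M(0,1) = (1,0) != 0.
M = [[0, 1], [0, 0]]
x = [0, 1]
assert norm(matvec(matmul(M, M), x)) == 0   # x is in N(M^2) ...
assert norm(matvec(M, x)) == 1              # ... but not in N(M)

# Normal example: A = iI + J with J symmetric, eigenvalue lam = 1 + i
# with eigenvector (1, 1); set Mn = A - lam.
A = [[1j, 1], [1, 1j]]
lam = 1 + 1j
Mn = [[A[i][j] - (lam if i == j else 0) for j in range(2)] for i in range(2)]
v = [1, 1]
assert norm(matvec(Mn, v)) < 1e-12                 # v in N(Mn)
assert norm(matvec(matmul(Mn, Mn), v)) < 1e-12     # v in N(Mn^2)
# A vector outside N(Mn) stays outside N(Mn^2), so the null space
# does not grow when Mn is squared:
assert norm(matvec(matmul(Mn, Mn), [1, 0])) > 0
```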

In the finite dimensional case, when $A$ is self-adjoint, existence of an orthonormal basis of $H$ consisting of eigenvectors of $A$ can be proved by a simple inductive argument that uses the fundamental theorem of algebra:

By the fundamental theorem of algebra, the characteristic polynomial of $A$ has a root in $\mathbb C$, so there exists an eigenvector $u$ of $A$, and we can assume $\lVert u \rVert = 1$. Let $L = \operatorname{span}(u)$. Since $A : L \to L$, it follows that $A^* : L^{\perp} \to L^{\perp}$. Hence $A : L^{\perp} \to L^{\perp}$. Induction gives us an orthonormal basis of $L^{\perp}$ consisting of eigenvectors of $A$, and adjoining $u$ to it gives an orthonormal basis of $H$ consisting of eigenvectors of $A$. QED.

Now note that if $S, T$ are self-adjoint and $ST = TS$, then the eigenspaces of $T$ are invariant under $S$: if $Tv = \lambda v$, then $T(Sv) = S(Tv) = \lambda Sv$. Thus a modification of the above inductive argument shows that there is an orthonormal basis of $H$ consisting of vectors that are simultaneously eigenvectors of $S$ and $T$.
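The invariance claim can be checked on a concrete pair of commuting matrices (a hypothetical $2\times2$ example, chosen only for illustration):

```python
# If S, T are self-adjoint and ST = TS, then S maps each eigenspace of T
# into itself: T(Sv) = S(Tv) = lam * (Sv).  Hypothetical 2x2 example.

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

S = [[0, 1], [1, 0]]      # self-adjoint (real symmetric)
T = [[2, 1], [1, 2]]      # self-adjoint, and ST == TS
assert matmul(S, T) == matmul(T, S)

v = [1, 1]                # eigenvector of T with eigenvalue 3
assert matvec(T, v) == [3, 3]

Sv = matvec(S, v)         # S v stays in the eigenspace of T for eigenvalue 3
assert matvec(T, Sv) == [3 * c for c in Sv]
```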

Still assuming $H$ is finite dimensional, the case when $A$ is normal can now be easily deduced from the self-adjoint case:

Write $A = S + iT$, with $S, T$ self-adjoint ($S = \frac{A + A^*}{2}$ and $T = \frac{A - A^*}{2i}$). The hypothesis that $A^*A = AA^*$ implies $ST = TS$. Thus there is an orthonormal basis $B = \{u_1, \dots, u_n\}$ of $H$ consisting of vectors that are simultaneously eigenvectors of $S$ and $T$. If $Su_j = s_j u_j$ and $Tu_j = t_j u_j$, then $Au_j = (s_j + it_j)u_j$, so each $u_j$ is an eigenvector of $A$ and the proof is complete.
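The Cartesian decomposition used in this last step can be verified numerically; the $2\times2$ matrix below is a hypothetical normal, non-self-adjoint example.

```python
# A = S + iT with S = (A + A*)/2 and T = (A - A*)/(2i): both are
# self-adjoint, and normality of A (A*A = AA*) forces ST = TS.

def adjoint(M):
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

def add(M, N, c=1):
    return [[M[i][j] + c * N[i][j] for j in range(2)] for i in range(2)]

A = [[1j, 1], [1, 1j]]            # normal but not self-adjoint
A_star = adjoint(A)
assert matmul(A, A_star) == matmul(A_star, A)   # normality

S = [[(A[i][j] + A_star[i][j]) / 2 for j in range(2)] for i in range(2)]
T = [[(A[i][j] - A_star[i][j]) / 2j for j in range(2)] for i in range(2)]

assert S == adjoint(S) and T == adjoint(T)      # both self-adjoint
assert matmul(S, T) == matmul(T, S)             # they commute
assert add(S, T, 1j) == A                       # A = S + iT
```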