Equivalence of spectral theorems


I have recently started studying operator theory and am aware that several different results go by the name "spectral theorem". I am interested in the connection (possibly equivalence) between the following theorem and the result that gives conditions under which the eigenfunctions of a linear Hermitian operator can be used as a basis:

Theorem:

Let $\mathcal{A}$ be a Banach algebra, let $a \in \mathcal{A}$, and let $f \in \operatorname{Hol}(a)$, i.e. $f$ is holomorphic on a neighborhood of $\sigma(a)$. Then $$\sigma(f(a)) = f(\sigma(a)).$$

What is the connection between this theorem and the result I mentioned above? In the finite-dimensional case, the latter result can be stated as the conditions required for a Hermitian matrix to be diagonalizable.
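For a concrete finite-dimensional illustration of how the two statements interact, here is a sketch in numpy (the matrix $A$ and the choice $f = \exp$ are mine, picked only for the example): the spectral theorem supplies an orthonormal eigenbasis for a Hermitian matrix, $f(A)$ is then defined through that basis, and the spectral mapping identity $\sigma(f(A)) = f(\sigma(A))$ can be checked numerically.

```python
import numpy as np

# Hypothetical example: a 2x2 Hermitian matrix, chosen for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Spectral theorem for Hermitian matrices: A = V diag(w) V*, with real
# eigenvalues w and an orthonormal eigenbasis V.
w, V = np.linalg.eigh(A)

# Define f(A) through the eigenbasis: f(A) = V diag(f(w)) V*.
f = np.exp
fA = V @ np.diag(f(w)) @ V.conj().T

# Spectral mapping: the spectrum of f(A) is f applied to the spectrum of A.
assert np.allclose(sorted(np.linalg.eigvalsh(fA)), sorted(f(w)))
```

Here the eigenbasis is what makes the definition of $f(A)$ so transparent; the point of the answers below is that the spectral mapping theorem itself does not require any such basis.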

Thanks.

There are 2 answers below.

Answer 1:

Let us restrict ourselves to finite-dimensional spaces. Consider the example \begin{align} A = \begin{pmatrix} 2 & 1\\ 0 & 2 \end{pmatrix} \ \ \text{ and } \ \ B= \begin{pmatrix} 2&0\\ 0 & 3 \end{pmatrix}. \end{align} It is clear that $A$ is not diagonalizable (it is a Jordan block). Let $f(z) = e^z$. Then \begin{align} f(B)=e^B = \begin{pmatrix} e^2&0\\ 0 & e^3 \end{pmatrix}, \end{align} which verifies the claim $\sigma(f(B)) = f(\sigma(B))$, since $\sigma(B)=\{2,3\}$ and $\sigma(e^B)=\{e^2,e^3\}$. For the other case, observe that \begin{align} A = \begin{pmatrix} 2&0\\ 0 & 2 \end{pmatrix} + \begin{pmatrix} 0&1\\ 0 & 0 \end{pmatrix} = C+D, \end{align} where $C$ and $D$ commute. Hence \begin{align} e^A = e^Ce^D = \begin{pmatrix} e^2&0\\ 0 & e^2 \end{pmatrix} \begin{pmatrix} 1&1\\ 0 & 1 \end{pmatrix} = \begin{pmatrix} e^2&e^2\\ 0 & e^2 \end{pmatrix}, \end{align} which again verifies $\sigma(f(A)) = f(\sigma(A))$, since $\sigma(A)=\{2\}$ and $\sigma(e^A)=\{e^2\}$. I hope this example illustrates that the holomorphic functional calculus says nothing about diagonalizability.
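The computations above can be checked numerically. The sketch below (mine, not part of the original answer) builds $e^A$ via the same commuting split $A = C + D$, where $D$ is nilpotent so $e^D = I + D$, rather than calling a library matrix exponential:

```python
import numpy as np

# The matrices from the answer above.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])   # a Jordan block, not diagonalizable
B = np.array([[2.0, 0.0],
              [0.0, 3.0]])   # diagonal

# e^B is computed entrywise on the diagonal.
eB = np.diag(np.exp(np.diag(B)))

# e^A via the commuting split A = C + D used above:
# C = 2I, D is nilpotent (D^2 = 0), so e^A = e^C e^D with e^D = I + D.
C = np.diag(np.diag(A))
D = A - C
eA = np.diag(np.exp(np.diag(C))) @ (np.eye(2) + D)

e2, e3 = np.exp(2.0), np.exp(3.0)
# Spectral mapping holds in both cases, diagonalizable or not:
assert np.allclose(sorted(np.linalg.eigvals(eB).real), sorted([e2, e3]))
assert np.allclose(np.linalg.eigvals(eA).real, [e2, e2])
```

Note that $e^A$ is again non-diagonalizable (it is upper triangular with a repeated eigenvalue $e^2$ and a nonzero off-diagonal entry), yet the spectral mapping identity holds regardless.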

Answer 2:

There isn't a direct connection. The spectral mapping theorem is more of an algebraic property in this particular case. To see why, suppose $\lambda\in\sigma(a)$, and suppose $f$ is holomorphic on a neighborhood of $\sigma(a)$. Then $$ f(\mu)-f(\lambda)=(\mu-\lambda)g(\mu), $$ where $g$ is holomorphic on a neighborhood of $\sigma(a)$. This gives $$ f(a)-f(\lambda)e = (a-\lambda e)g(a). $$ Because $a-\lambda e$ is not invertible and commutes with $g(a)$, the product $f(a)-f(\lambda)e$ cannot be invertible, which forces $f(\lambda)\in\sigma(f(a))$, i.e. $$ f(\sigma(a)) \subseteq \sigma(f(a)). $$ Conversely, if $\lambda\in\sigma(f(a))$, then there is a multiplicative linear functional $\omega$ on the commutative subalgebra generated by $a$ and $e$ such that $\lambda=\omega(f(a))=f(\omega(a))$. Since $\omega(a)\in\sigma(a)$, this gives $\sigma(f(a))\subseteq f(\sigma(a))$.
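The algebraic factorization at the heart of this argument can be made concrete for a polynomial. In the sketch below (my example, not from the answer), for $f(x) = x^2$ the quotient is $g(\mu) = \mu + \lambda$, and the identity $f(A) - f(\lambda)I = (A - \lambda I)\,g(A)$ shows directly why $f(\lambda)$ lands in $\sigma(f(A))$:

```python
import numpy as np

# Illustrating f(mu) - f(lambda) = (mu - lambda) g(mu) for f(x) = x^2,
# where g(mu) = mu + lambda. The matrix A is a choice for this example.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam = 2.0                      # an eigenvalue of A, so A - lam*I is singular
I = np.eye(2)

f = lambda M: M @ M            # f(x) = x^2 applied to a matrix
fA = f(A)

# The algebraic identity f(A) - f(lam) I = (A - lam I)(A + lam I):
assert np.allclose(fA - lam**2 * I, (A - lam * I) @ (A + lam * I))

# A - lam*I is singular and commutes with g(A) = A + lam*I, so the product
# (hence f(A) - f(lam) I) is singular: f(lam) lies in the spectrum of f(A).
assert abs(np.linalg.det(fA - lam**2 * I)) < 1e-12
```

The same factorization works for any polynomial, and the holomorphic case in the answer above is the analytic generalization of this purely algebraic step.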