Why is the spectrum usually defined for operators between Banach spaces?


The spectrum of a linear operator $L: \mathcal{D}(L) \rightarrow \mathcal{X}$ is generally defined for $\mathcal{X}$ a Banach space (as seen, for example, in the Wikipedia articles on the spectrum of an operator and on spectral decomposition, and in related questions and answers on this site).

Why is this? Why don't we define the spectrum more generally for operators between normed spaces? Where do we need the completeness?


There are 2 answers below.

BEST ANSWER

Set aside the spectrum of unbounded operators, which is a rather special definition that does not quite fit the general framework; the general definition of the spectrum goes as follows...


The basic ingredient for spectral theory in general is a unital algebra $1\in\mathcal{A}$.

Denote the set of invertibles by $\mathcal{A}^\ast$.

Then, the spectrum of an element is nothing but: $$A\in\mathcal{A}:\quad\sigma(A):=\{\lambda\in\mathbb{C}:A-\lambda 1\notin\mathcal{A}^\ast\}$$
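As a sanity check (my own illustration, not part of the answer above): in the matrix algebra $\mathcal{A}=M_n(\mathbb{C})$ this algebraic definition recovers the familiar set of eigenvalues, since $A-\lambda 1$ fails to be invertible exactly when its determinant vanishes: $$\sigma(A)=\{\lambda\in\mathbb{C}:A-\lambda 1\notin\mathrm{GL}_n(\mathbb{C})\}=\{\lambda\in\mathbb{C}:\det(A-\lambda 1)=0\}$$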


Now, the bounded linear operators on a Banach space form a unital Banach algebra: $$1\in\mathcal{B}(E):=\{T:E\to E:T\text{ bounded, linear}\}$$

It is important, though, to have an identity operator; that forces the target space to agree with the domain! So although the bounded linear operators between two different Banach spaces form a Banach space, they do not form an algebra: there is no composition within the collection, and in particular no identity: $$1\notin\mathcal{B}(E,F)$$

Note also that it is not the Banach space itself that is the structure being studied, but the algebra of operators acting on the Banach space...

Of course, one could just as well consider the bounded linear operators on a normed space: $$1\in\mathcal{B}(X):=\{T:X\to X:T\text{ bounded, linear}\}$$ or one could even consider the merely linear operators on a vector space: $$1\in\mathcal{L}(V):=\{T:V\to V:T\text{ linear}\}$$

However, the former lacks completeness, which turns out to be crucially important, and the latter has no topological structure at all.
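To see where completeness enters (a standard argument, sketched here as an illustration): for $\|A\|<1$ the geometric series for $(1-A)^{-1}$ has Cauchy partial sums, and completeness of $\mathcal{B}(E)$ is exactly what guarantees they converge: $$(1-A)^{-1}=\sum_{n=0}^{\infty}A^{n},\qquad\Big\|\sum_{n=N}^{M}A^{n}\Big\|\le\sum_{n=N}^{M}\|A\|^{n}\xrightarrow[N,M\to\infty]{}0$$

Without completeness the partial sums are still Cauchy but need not converge, so the set of invertibles need not be open and the spectrum need not be closed.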

ANSWER

Spectral theory evolved with the goal of expanding functions in eigenfunctions of some operator. Eigenfunctions and eigenvalues make no sense for $L : X\rightarrow Y$ with $X\ne Y$, because the equation $Lf=\lambda f$ requires $Lf$ and $f$ to lie in the same space. You can't have discrete eigenvalues with discrete eigenfunction expansions, or approximate eigenvalues with continuous (integral) eigenfunction expansions, because those statements no longer make sense if $X\ne Y$.

J. Dieudonné, in A History of Functional Analysis, details how spectral theory came out of the ordinary differential equations arising from Fourier's separation of variables. The separation parameter gave an eigenvalue equation $$ Lf=\lambda f. $$ For finite intervals one had discrete values $\lambda_{n}$ for which non-zero solutions $f_{n}$ existed, and it was found that these eigenfunctions were mutually 'orthogonal' in an integral sense whenever the eigenvalues were different. This led to a general expansion $$ f = \sum_{n} \frac{(f,f_{n})}{(f_{n},f_{n})}f_{n} $$ where the 'inner-product' pairing was $$ (f,g) = \int_{a}^{b}f(t)g(t)\,w(t)\,dt $$ for some weight $w$. Using such expansions greatly simplified the process of solving the original partial differential equations; using such functions amounted to a diagonalization of the operator. (By the way, systematic matrix diagonalization came out of these ODE theories, not the other way around.)
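For a concrete instance of the above (my own illustration): take $L=-\frac{d^{2}}{dt^{2}}$ on $[0,\pi]$ with boundary conditions $f(0)=f(\pi)=0$. The eigenvalue equation $Lf=\lambda f$ has non-zero solutions exactly for $\lambda_{n}=n^{2}$, with eigenfunctions $f_{n}(t)=\sin(nt)$, and since $(f_{n},f_{n})=\int_{0}^{\pi}\sin^{2}(nt)\,dt=\pi/2$ (weight $w\equiv 1$), the general expansion becomes the familiar Fourier sine series: $$ f(t)=\sum_{n=1}^{\infty}\frac{(f,f_{n})}{(f_{n},f_{n})}\sin(nt)=\sum_{n=1}^{\infty}\frac{2}{\pi}\left(\int_{0}^{\pi}f(s)\sin(ns)\,ds\right)\sin(nt) $$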