Essentially self adjoint operators: a verification procedure


$\langle \,\cdot\,, \cdot\, \rangle : \mathcal H \times \mathcal H \rightarrow \mathbb C$ is the inner product.

Let $T : \mathcal{D}(T) \rightarrow \mathcal H$ be a linear symmetric operator, so:

  • $\mathcal{D}(T)$ is dense in $\mathcal H$
  • $\langle \phi, T\psi \rangle = \langle T\phi, \psi \rangle \quad \forall \phi, \psi \in \mathcal{D}(T)$

The following properties are equivalent:

  1. $T$ is essentially self-adjoint: $T^* = \overline{T}$
  2. $\operatorname{Ker}(T^* \pm \lambda I) = \{0\}$

If I have understood correctly, point 2 of the theorem is telling us that $(T^* \pm \lambda I)\psi = 0 \Longleftrightarrow \psi = 0$ or, to put it in other terms, the only solution of $$T^* \psi = \mp \lambda \psi$$ is the null vector.

My professor: "It's therefore evident that if we want to check if any symmetric operator $T$ admits a unique self-adjoint extension - its closure - we just have to make sure $\pm i$ are not eigenvalues of $T^*$"

Is this really enough? And what does it have to do with the triviality of the kernels?
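As a concrete illustration of the professor's criterion (a standard textbook example, not from this post): for the momentum operator $T = -i\,\frac{d}{dx}$, essential self-adjointness depends on the domain. One solves $T^*\psi = \pm i\psi$ and asks whether the solutions are square-integrable; a sympy sketch, under the assumption that the deficiency equations reduce to the ODEs below:

```python
# Illustration (textbook example, not from the post): T = -i d/dx is symmetric
# on C_c^infty(0, oo) but NOT essentially self-adjoint there, because
# Ker(T* - iI) is nontrivial.  We solve T* psi = ±i psi, i.e. -i psi' = ±i psi,
# and then test L^2 membership of the solutions.
import sympy as sp

x = sp.symbols('x', real=True)
psi = sp.Function('psi')

# T* psi = +i psi  <=>  psi' = -psi  =>  psi = C * exp(-x)
sol_plus = sp.dsolve(sp.Eq(-sp.I * psi(x).diff(x), sp.I * psi(x)), psi(x))
# T* psi = -i psi  <=>  psi' = +psi  =>  psi = C * exp(+x)
sol_minus = sp.dsolve(sp.Eq(-sp.I * psi(x).diff(x), -sp.I * psi(x)), psi(x))
print(sol_plus, sol_minus)

# Half line (0, oo): exp(-x) IS square-integrable, so Ker(T* - iI) != {0}
# and T is not essentially self-adjoint on C_c^infty(0, oo).
norm_half = sp.integrate(sp.exp(-x)**2, (x, 0, sp.oo))       # finite (1/2)
# Whole line: neither exp(-x) nor exp(+x) is in L^2(R), so both kernels are
# trivial and T IS essentially self-adjoint on C_c^infty(R).
norm_full = sp.integrate(sp.exp(-x)**2, (x, -sp.oo, sp.oo))  # diverges
print(norm_half, norm_full)
```

So "checking that $\pm i$ are not eigenvalues of $T^*$" is a genuine computation: it can fail on one domain and succeed on another for the same formal operator.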



Best answer:

Suppose $T : \mathcal{D}(T)\subset \mathcal{H}\rightarrow\mathcal{H}$ is closed, densely defined and symmetric. (It is reasonable to assume that $T$ is closed because, if $T$ is symmetric, $T$ has a closure that is also symmetric.) Then the graph $\mathcal{G}(T)$ of $T$ is a closed subspace of $\mathcal{G}(T^*)$, so there is a closed subspace $M$ of $\mathcal{H}\times\mathcal{H}$ such that one has the following orthogonal decomposition in $\mathcal{H}\times\mathcal{H}$: $$ \mathcal{G}(T)\oplus M=\mathcal{G}(T^*). $$

If $(m,n)\in M$, then $(m,n)\in\mathcal{G}(T^*)$, which gives $n=T^*m$. And $(m,T^*m)\perp\mathcal{G}(T)$ iff $$ \langle (x,Tx),(m,T^*m)\rangle_{\mathcal{H}\times\mathcal{H}}=0,\quad x\in\mathcal{D}(T), $$ that is, $$ \langle x,m\rangle+\langle Tx,T^*m\rangle=0,\quad x\in\mathcal{D}(T), $$ which is equivalent to $$ \langle Tx,T^*m\rangle = \langle x,-m\rangle,\quad x\in\mathcal{D}(T). $$ Therefore, by the definition of the adjoint, $T^*m\in\mathcal{D}(T^*)$ and $(T^*)^2m+m=0$.

Any such $m$ may be written as $m_{+}+m_{-}$, where $T^*m_{+}=im_{+}$ and $T^*m_{-}=-im_{-}$. The decomposition is $$ m=\frac{1}{2i}(iI+T^*)m+\frac{1}{2i}(iI-T^*)m, \\ m_{+}=\frac{1}{2i}(iI+T^*)m,\;\;\; m_{-}=\frac{1}{2i}(iI-T^*)m. $$

This gives the following orthogonal decomposition in $\mathcal{H}\times\mathcal{H}$: $$ \mathcal{G}(T)\oplus\mathcal{N}(T^*+iI)\oplus\mathcal{N}(T^*-iI)=\mathcal{G}(T^*). $$ From this it becomes evident that $T$ is self-adjoint iff $\mathcal{N}(T^*\pm iI)=\{0\}$. You can substitute $T$ with $\frac{1}{\Im\lambda}(T-\Re\lambda I)$ in order to obtain the result for a general $\lambda\notin\mathbb{R}$, not just for $\lambda=i$.
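The algebra of the $m = m_+ + m_-$ splitting can be sanity-checked in finite dimensions (illustrative only; here a matrix $A$ with $A^2 m = -m$ stands in for $T^*$ acting on such an $m$, and the normalization $\frac{1}{2i}$ is exactly what makes the two pieces sum back to $m$):

```python
# Finite-dimensional sanity check of the decomposition in the answer:
# if A^2 m = -m, then m_± := (1/(2i))(iI ± A)m satisfy A m_± = ±i m_±
# and m_+ + m_- = m.  A is a stand-in for T* on the span of such m.
import numpy as np

# A real 2x2 matrix with A^2 = -I (a rotation by 90 degrees).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
I2 = np.eye(2)
m = np.array([3.0, 4.0])
assert np.allclose(A @ (A @ m), -m)          # A^2 m = -m

m_plus  = (1 / (2j)) * (1j * I2 + A) @ m
m_minus = (1 / (2j)) * (1j * I2 - A) @ m

assert np.allclose(m_plus + m_minus, m)      # the pieces recover m
assert np.allclose(A @ m_plus,  1j * m_plus)   # A m_+ = +i m_+
assert np.allclose(A @ m_minus, -1j * m_minus)  # A m_- = -i m_-
print("decomposition verified")
```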

Another answer:

Your $\ \lambda\ $ here should be $\ i\ $. That $\ T\ $ is essentially self-adjoint if and only if both $\ T^*+iI\ $ and $\ T^*-iI\ $ have trivial kernels is a well-known result of functional analysis. See Corollary $9.22$ in Hall's Quantum Theory for Mathematicians, for instance.