Hermite Functions and Multiplication Operators: Why is the "obvious" equation correct?


Question: Let $T$ be the unbounded operator on $L^2(\mathbb{R})$ defined by $$Tf(x) = \frac{1}{\sqrt{2}} \left( x f(x) - f^{\prime}(x) \right), $$ with domain $\mathfrak{D}(T)$ restricted to the set where $xf(x)$ and $f^{\prime}(x)$ are each, separately, in $L^2(\mathbb{R})$ (and we permit $f^{\prime}(x)$ to be a distributional derivative).

It is known (e.g., Folland's Real Analysis, Exercise 8.23 (a -- c)) that $T h_k(x) = \sqrt{k + 1} h_{k + 1}(x)$, where $h_k(x)$ is the $k$th Hermite function (precise definition below).
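
(A quick numerical sanity check, not part of the argument: the relation $T h_k = \sqrt{k+1}\, h_{k+1}$ can be verified on a grid. The sketch below assumes the normalization of $h_k$ given under "Motivation" below, i.e. $h_k(x) = H_k(x) e^{-x^2/2}/\sqrt{\sqrt{\pi}\, 2^k k!}$ with $H_k$ the physicists' Hermite polynomial, and uses a finite-difference derivative; the helper name `hermite_fn` is mine.)

```python
import numpy as np
from scipy.special import eval_hermite
from math import pi, factorial

def hermite_fn(k, x):
    # h_k(x) = H_k(x) e^{-x^2/2} / sqrt(sqrt(pi) 2^k k!)  (normalization assumed from the definition below)
    return eval_hermite(k, x) * np.exp(-x**2 / 2) / np.sqrt(np.sqrt(pi) * 2.0**k * factorial(k))

x = np.linspace(-12, 12, 48001)                           # fine grid; h_k decays like a Gaussian
for k in range(6):
    hk = hermite_fn(k, x)
    Thk = (x * hk - np.gradient(hk, x)) / np.sqrt(2)      # T f = (x f - f') / sqrt(2)
    target = np.sqrt(k + 1) * hermite_fn(k + 1, x)
    print(k, np.max(np.abs(Thk - target)))                # only finite-difference error remains
```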

Of course, the Hermite functions form an orthonormal basis of $L^2(\mathbb{R})$, so for any $f \in \mathfrak{D}(T)$, we may write $$f(x) = \sum_{k = 0}^{\infty} c_k h_k(x).$$

I would like to write then that for any $f \in \mathfrak{D}(T)$,

$$Tf(x) = \sum_{k = 0}^{\infty} c_k T h_k(x) = \sum_{k = 0}^{\infty} c_k \sqrt{k + 1} h_{k + 1}(x);$$ certainly, this would be true for finite sums.

How do I justify the linearity across the infinite sum?


Motivation: Let the (physicist's) Hermite functions be defined by $$h_k(x) = \frac{(-1)^k}{\sqrt{\sqrt{\pi} 2^k k!}} e^{x^2/2} \left( \frac{d}{dx} \right)^k e^{-x^2}, \quad k \in \mathbb{N} \cup \left\lbrace 0 \right\rbrace;$$ they form an orthonormal basis of $L^2(\mathbb{R})$.

We define $\ell^2$ to be the space of square-summable sequences, where we let the sequences be indexed starting at $0$ for convenience. Since the $h_k(x)$ form an orthonormal basis of $L^2(\mathbb{R})$, we know that the map $\iota: \ell^2 \to L^2(\mathbb{R})$ given by \begin{equation} \iota \left( (c_k)_{k = 0}^{\infty} \right) := \sum_{k = 0}^{\infty} c_k h_k(x) \end{equation} is an isomorphism of Hilbert spaces.
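
(A small numerical illustration of this unitarity, in case it helps: the sketch below expands the test function $f(x) = e^{-x^2/4}$, which is not a finite Hermite sum, and checks Parseval's identity and the $L^2$ convergence of the partial sums. The grid quadrature, the cutoff of 40 terms, and the helper `hermite_fn` are my choices.)

```python
import numpy as np
from scipy.special import eval_hermite
from math import pi, factorial

def hermite_fn(k, x):
    return eval_hermite(k, x) * np.exp(-x**2 / 2) / np.sqrt(np.sqrt(pi) * 2.0**k * factorial(k))

x = np.linspace(-20, 20, 8001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 4)                                                  # a test function in L^2(R)

c = np.array([np.sum(f * hermite_fn(k, x)) * dx for k in range(40)])   # c_k = <f, h_k> by quadrature
partial = sum(c[k] * hermite_fn(k, x) for k in range(40))              # truncated iota((c_k))

print(np.sum(f**2) * dx, np.sum(c**2))                                 # ||f||^2 vs sum |c_k|^2 (Parseval)
print(np.max(np.abs(f - partial)))                                     # partial sums approach f
```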

I wish to relate conditions on the decay of the coefficients to the integrability of the resulting functions. Let $$\mathfrak{L}_1 = \left\lbrace (c_k)_{k = 0}^{\infty} \in \ell^2: (\sqrt{k + 1} c_k)_{k = 0}^{\infty} \in \ell^2 \right\rbrace$$ and $$\mathfrak{D}_1 = \left\lbrace f(x) \in L^2(\mathbb{R}): xf(x) \in L^2(\mathbb{R}), s \widehat{f}(s) \in L^2(\mathbb{R}) \right\rbrace,$$ where $\widehat{f}(s)$ is the Fourier transform of $f$, under the convention \begin{equation} \widehat{f}(s) = \frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R}} f(x) e^{- i x s} \, dx, \quad f \in L^1(\mathbb{R}). \end{equation}

I wish to show that $\iota (\mathfrak{L}_1) = \mathfrak{D}_1$.


I can handle the inclusion $\iota (\mathfrak{L}_1) \subseteq \mathfrak{D}_1$. We know that \begin{equation} \tag{1} x h_k(x) = \sqrt{\frac{k + 1}{2}} h_{k + 1}(x) + \sqrt{\frac{k}{2}}h_{k-1}(x), \end{equation} so again, for finite linear combinations of Hermite functions there is no issue. Moreover, for any $(c_k) \in \mathfrak{L}_1$, let $f = \iota (c_k)$, and let $P_n$ be the orthogonal projection onto $\text{span} \langle h_0, h_1, \dotsc , h_n \rangle$.
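
(If it is helpful, the recurrence (1) can be checked on a grid for small $k$; the sketch below does this for $k = 1, \dotsc, 7$, with `hermite_fn` the same helper normalization as in the earlier sketch.)

```python
import numpy as np
from scipy.special import eval_hermite
from math import pi, factorial

def hermite_fn(k, x):
    return eval_hermite(k, x) * np.exp(-x**2 / 2) / np.sqrt(np.sqrt(pi) * 2.0**k * factorial(k))

x = np.linspace(-10, 10, 4001)
for k in range(1, 8):
    lhs = x * hermite_fn(k, x)
    rhs = np.sqrt((k + 1) / 2) * hermite_fn(k + 1, x) + np.sqrt(k / 2) * hermite_fn(k - 1, x)
    assert np.allclose(lhs, rhs, atol=1e-10)   # (1) holds up to roundoff
print("recurrence (1) verified on the grid for k = 1, ..., 7")
```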

By (1), for all $n \in \mathbb{N}$, $P_n f$ is in the domain of the multiplication-by-$x$ operator, hereafter denoted $M_x$. Moreover, we obtain the loose bound that for $m \leq n$, $m, n \in \mathbb{N}$, $$ \Vert M_x P_n f - M_x P_m f \Vert_{L^2}^2 \leq 2 \sum_{k = m - 1}^{n + 1} (k + 1) \vert c_k \vert^2, $$ so, since $f = \iota\left( (c_k) \right)$ and $( \sqrt{k + 1}\, c_k) \in \ell^2$, the sequence $( M_x P_n f)_{n = 1}^{\infty}$ is Cauchy in $L^2(\mathbb{R})$, hence convergent. Clearly $P_n f \to f$ in $L^2(\mathbb{R})$, and $M_x P_n f \to g$ for some $g \in L^2(\mathbb{R})$; by the closedness of $M_x$, we conclude that $f$ is in the domain of $M_x$ and $M_x f = \lim_{n \to \infty} M_x P_n f$. In other words, $x f(x) \in L^2(\mathbb{R})$.
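
(To see the estimate in action, here is a coefficient-level illustration, using the fact from (1) that the Hermite coefficients of $M_x P_n f$ are $a_j = \sqrt{j/2}\,c_{j-1} + \sqrt{(j+1)/2}\,c_{j+1}$, with $c_k := 0$ for $k > n$. The sample decaying sequence and the indices $m, n$ below are my own choices.)

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
c = rng.standard_normal(N) / (np.arange(N) + 1.0)**1.5   # a sample (c_k) with (sqrt(k+1) c_k) in l^2

def Mx_coeffs(c_trunc, size):
    # Hermite coefficients of M_x applied to sum_{k <= n} c_k h_k, via the recurrence (1)
    a = np.zeros(size)
    for k, ck in enumerate(c_trunc):
        a[k + 1] += np.sqrt((k + 1) / 2) * ck
        if k >= 1:
            a[k - 1] += np.sqrt(k / 2) * ck
    return a

m, n = 50, 150
lhs = np.sum((Mx_coeffs(c[:n + 1], N + 2) - Mx_coeffs(c[:m + 1], N + 2))**2)  # ||M_x P_n f - M_x P_m f||^2
rhs = 2 * sum((k + 1) * c[k]**2 for k in range(m - 1, n + 2))                 # the loose bound above
print(lhs <= rhs, lhs, rhs)
```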

By our convention on the Fourier transform, we have (e.g., Folland, Real Analysis, Exercise 8.23(h)) that \begin{equation} \tag{2} \widehat{h}_k(s) = (-i)^k h_k(s); \end{equation} hence, showing that $s \widehat{f}(s) \in L^2(\mathbb{R})$ proceeds in essentially the same way.
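
(Relation (2) can also be checked by direct quadrature under the convention above; the grid, the sample frequencies, and the helper `hermite_fn` below are my own choices.)

```python
import numpy as np
from scipy.special import eval_hermite
from math import pi, factorial

def hermite_fn(k, x):
    return eval_hermite(k, x) * np.exp(-x**2 / 2) / np.sqrt(np.sqrt(pi) * 2.0**k * factorial(k))

x = np.linspace(-15, 15, 6001)
dx = x[1] - x[0]
s = np.linspace(-4, 4, 41)                     # sample frequencies
for k in range(5):
    hk = hermite_fn(k, x)
    # \hat{h}_k(s) = (1/sqrt(2 pi)) * integral of h_k(x) e^{-i s x} dx, approximated by a Riemann sum
    ft = np.array([np.sum(hk * np.exp(-1j * si * x)) * dx for si in s]) / np.sqrt(2 * pi)
    assert np.allclose(ft, (-1j)**k * hermite_fn(k, s), atol=1e-8)
print("relation (2) verified numerically for k = 0, ..., 4")
```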


In my attempt to show $\mathfrak{D}_1 \subseteq \iota(\mathfrak{L}_1)$, I note first that if $f \in \mathfrak{D}_1$, then $f \in L^2(\mathbb{R})$ and $s \widehat{f}(s) \in L^2(\mathbb{R})$, so $f \in \mathcal{H}^1(\mathbb{R})$, the $L^2$-Sobolev space of first order; in particular, the distributional derivative $f^{\prime}(x)$ is in $L^2(\mathbb{R})$ (indeed, by Plancherel's theorem, $\Vert f^{\prime} \Vert_{L^2} = \Vert s \widehat{f}(s) \Vert_{L^2}$ under the convention above).

Since for $f \in \mathfrak{D}_1$ we also have $x f(x) \in L^2(\mathbb{R})$, we see that $f \in \mathfrak{D}(T)$. If the term-by-term identity holds, then, since $Tf \in L^2(\mathbb{R})$, it will follow that $(\sqrt{k + 1}\, c_k) \in \ell^2$, and we will be done.

Perhaps (probably!) it is just a mental block, but I am less certain how to pass to the infinite sum in this direction. Certainly $$ \langle T h_j, T h_k \rangle = (k + 1) \delta_{j, k}, $$ so we have some orthogonality to work with, but I do not see how to justify the passage to infinite sums, even under the hypothesis that $f \in \mathfrak{D}(T)$.

Best Answer

The operator $Hf = -\frac{d^{2}}{dx^{2}}f+x^{2}f$ is self-adjoint when defined on the domain $\mathcal{D}(H)$ consisting of all $f$ such that $f$ and $f'$ are absolutely continuous and $f, Hf \in L^{2}(\mathbb{R})$. That is certainly far from obvious but, nevertheless, is true. In particular, it means $$ (Hf,g)=(f,Hg),\;\;\; f, g \in \mathcal{D}(H). $$ The part that is even more interesting is that every $f \in \mathcal{D}(H)$ satisfies $xf \in L^{2}$ and $f' \in L^{2}$, which allows the classical raising and lowering operators $L_{+}f=-f'+xf$ and $L_{-}f=f'+xf$ to be defined as valid Hilbert space operators on $\mathcal{D}(H)$.

These operators are related to $H$ and almost give a factoring of $H$: \begin{align} L_{+}L_{-}f & =-(f'+xf)'+x(f'+xf) \\ & =-f''-xf'-f+xf'+x^{2}f \\ & =(H-I)f, \\ L_{-}L_{+}f & = (-f'+xf)'+x(-f'+xf) \\ & = -f''+xf'+f-xf'+x^{2}f \\ & = (H+I)f. \end{align} Both $L_{+}$ and $L_{-}$ are well defined on $\mathcal{D}(H)$ because of the properties given above, which puts these methods on sound mathematical footing. Furthermore, it can be shown that the integration-by-parts boundary terms vanish, giving $$ (Hf,g) = (f',g')+(xf,xg),\;\;\; f,g\in\mathcal{D}(H). $$ That tells you right away that any eigenvalue of $H$ is real and nonnegative, because $Hf=\lambda f$ gives $$ \lambda\|f\|^{2}=(Hf,f)=\|f'\|^{2}+\|xf\|^{2}. $$

This has an interesting consequence because of the raising and lowering operators. Suppose $f$ is an $L^{2}$ eigenfunction of $H$ with eigenvalue $\lambda$. By the discussion above, it is okay to apply any number of powers of $L_{+}$, $L_{-}$ to $f$, and the result remains in the domain of $H$. However, \begin{align} HL_{-}f & = (L_{-}L_{+}-I)L_{-}f \\ & = L_{-}(L_{+}L_{-}-I)f \\ & = L_{-}((H-I)-I)f \\ & = (\lambda-2)L_{-}f. \end{align} That's a problem, because $L_{-}f$ is then either zero or an eigenfunction with the strictly smaller eigenvalue $\lambda-2$, and repeating the argument would eventually produce an eigenfunction with a negative eigenvalue. The only way out of this trap is that the chain terminates: there is an eigenfunction $f_0$, with some eigenvalue $\lambda_0$, for which $L_{-}f_0=0$. That allows you to solve for $f_0$: $$ L_{-}f_0=f_0'+xf_0 = 0 \implies f_0 = Ce^{-x^{2}/2}. $$ Automatically, from $L_{+}L_{-}+I=H$ and $L_{-}f_0=0$, you get the lowest eigenvalue $\lambda_{0}=1$: $$ Hf_{0} = (L_{+}L_{-}+I)f_{0}=f_{0}. $$

You can similarly show that $L_{+}f_0$ is an eigenfunction with a higher eigenvalue: \begin{align} HL_{+}f_0 & = (L_{+}L_{-}+I)L_{+}f_0 \\ & = L_{+}(L_{-}L_{+}+I)f_0 \\ & = L_{+}(H+2I)f_0 \\ & = (\lambda_0+2)L_{+}f_0. \end{align} That is, $L_{+}f_0$ is an eigenvector with eigenvalue $\lambda_0+2$. Continuing this process gives a chain of eigenvectors $f_n=L_{+}^{n}f_0$ with eigenvalues $\lambda_0+2n = 2n+1$, that is, $$ f_n = \left(-\frac{d}{dx}+x\right)^{n}f_0. $$

Using an integrating factor, you can write $$ \left(-\frac{d}{dx}+x\right)g = -e^{x^{2}/2}\frac{d}{dx}\left(e^{-x^{2}/2}g\right). $$ That's how you get $$ f_n = (-1)^{n}e^{x^{2}/2}\frac{d^{n}}{dx^{n}}\left(e^{-x^{2}/2}f_0\right) = (-1)^{n}e^{x^{2}/2}\frac{d^{n}}{dx^{n}}e^{-x^{2}}, $$ taking $C=1$ in $f_0$. Then you just need normalization constants $C_n$ so that you get an orthonormal set $\{ C_n f_n \}_{n=0}^{\infty}$ of eigenfunctions: $$ \int_{-\infty}^{\infty}|C_nf_n|^{2}\,dx = 1. $$ There is no continuous spectrum, and there cannot be any other eigenfunctions, so $\{ C_n f_n \}$ must form a complete orthonormal basis of $L^{2}(\mathbb{R})$.
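
(If you want to see the algebra above checked mechanically, here is a small symbolic sketch with sympy: it verifies the two factorizations $L_+L_- = H - I$ and $L_-L_+ = H + I$ and the eigenvalue chain $Hf_n = (2n+1)f_n$ for the unnormalized $f_n$. The helper names are mine.)

```python
import sympy as sp

x = sp.symbols('x', real=True)

def H(f):          # H f = -f'' + x^2 f
    return -sp.diff(f, x, 2) + x**2 * f

def Lp(f):         # L_+ f = -f' + x f
    return -sp.diff(f, x) + x * f

def Lm(f):         # L_- f = f' + x f
    return sp.diff(f, x) + x * f

def f_n(n):        # unnormalized eigenfunctions (-1)^n e^{x^2/2} d^n/dx^n e^{-x^2}
    return sp.simplify((-1)**n * sp.exp(x**2 / 2) * sp.diff(sp.exp(-x**2), x, n))

g = f_n(3)         # any smooth sample function works for the operator identities
assert sp.simplify(Lp(Lm(g)) - (H(g) - g)) == 0               # L_+ L_- = H - I
assert sp.simplify(Lm(Lp(g)) - (H(g) + g)) == 0               # L_- L_+ = H + I

for n in range(5):
    assert sp.simplify(H(f_n(n)) - (2*n + 1) * f_n(n)) == 0   # H f_n = (2n + 1) f_n
print("ladder factorizations and eigenvalues check out for n = 0, ..., 4")
```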
Completeness means that every $f\in L^{2}(\mathbb{R})$ can be written as the $L^{2}$-convergent sum $$ f = \sum_{n=0}^{\infty}(f,C_nf_n)\,C_n f_n. $$ The ladder operators $L_{+}$ and $L_{-}$ map $\mathcal{D}(H)$ into $L^{2}$, as mentioned before, and you can show that $(L_{+}f,g)=(f,L_{-}g)$ for $f,g\in \mathcal{D}(H)$ using integration by parts; the boundary terms disappear because $f$, $g$, $f'$, $g'$ are all in $L^{2}$.

Using $L_{-}f_n = 2n\,f_{n-1}$ (which follows from $L_{-}L_{+}=H+I$ and $Hf_{n-1}=(2n-1)f_{n-1}$), that allows you to expand \begin{align} L_{+}f & = \sum_{n=0}^{\infty}(L_{+}f,C_nf_n)\,C_nf_n \\ & = \sum_{n=0}^{\infty}(f,C_nL_{-}f_n)\,C_nf_n \\ & = \sum_{n=1}^{\infty}2n\,(f,C_n f_{n-1})\,C_n f_n. \end{align} This sum is guaranteed to converge in $L^{2}$, and you can do the same with $L_{-}$. Then everything comes down to getting the constants right, for example by writing $C_n f_{n-1}=\frac{C_n}{C_{n-1}}\,C_{n-1}f_{n-1}$.
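
(The two identities used in this last expansion, namely the adjoint relation $(L_+f,g) = (f, L_-g)$ and the lowering identity $L_-f_n = 2n f_{n-1}$, can likewise be spot-checked symbolically; the concrete test functions below are my own choices.)

```python
import sympy as sp

x = sp.symbols('x', real=True)

def Lp(f):   # L_+ f = -f' + x f
    return -sp.diff(f, x) + x * f

def Lm(f):   # L_- f = f' + x f
    return sp.diff(f, x) + x * f

def f_n(n):  # unnormalized eigenfunctions (-1)^n e^{x^2/2} d^n/dx^n e^{-x^2}
    return sp.simplify((-1)**n * sp.exp(x**2 / 2) * sp.diff(sp.exp(-x**2), x, n))

# L_- f_n = 2 n f_{n-1}, the identity behind the shifted sum above
for n in range(1, 6):
    assert sp.simplify(Lm(f_n(n)) - 2*n*f_n(n - 1)) == 0

# (L_+ f, g) = (f, L_- g) for two concrete rapidly decaying real functions
f = x * sp.exp(-x**2 / 2)
g = (x**2 - 1) * sp.exp(-x**2 / 2)
lhs = sp.integrate(Lp(f) * g, (x, -sp.oo, sp.oo))
rhs = sp.integrate(f * Lm(g), (x, -sp.oo, sp.oo))
assert sp.simplify(lhs - rhs) == 0
print("adjoint relation and L_- f_n = 2n f_{n-1} verified for n = 1, ..., 5")
```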