Showing that $\{ e_n \}_{n\in\mathbb Z}$ is an orthonormal basis


I have to show that $ \{ e_n \}_{n \in \mathbb Z} $, defined by

$$ e_n(x) := \frac{1}{\sqrt 2} \mathrm e^{\mathrm i \pi n x} $$

is an orthonormal basis of $L^2([-1,1])$, equipped with the scalar product $\langle f, g \rangle = \int_{-1}^1 \overline f g $.

I have already proved that it is an orthonormal system, and it remains to show completeness, i.e. that $0$ is the only vector orthogonal to all $e_n$.
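(A numerical aside, not part of the original post: the orthonormality relations $\langle e_n, e_m \rangle = \delta_{nm}$ can be sanity-checked on a uniform periodic grid; the helper names `e` and `inner` below are my own.)

```python
import numpy as np

def e(n):
    """Basis function e_n(x) = exp(i*pi*n*x) / sqrt(2)."""
    return lambda x: np.exp(1j * np.pi * n * x) / np.sqrt(2)

def inner(f, g, x, h):
    """Riemann-sum approximation of <f, g> = int_{-1}^{1} conj(f) g."""
    return np.sum(np.conj(f(x)) * g(x)) * h

M = 4000
x = np.linspace(-1.0, 1.0, M, endpoint=False)  # uniform periodic grid on [-1, 1)
h = 2.0 / M

for n in range(-3, 4):
    for m in range(-3, 4):
        val = inner(e(n), e(m), x, h)
        expected = 1.0 if n == m else 0.0
        assert abs(val - expected) < 1e-10, (n, m, val)
print("orthonormality verified for |n|, |m| <= 3")
```

(The Riemann sum over a full period is essentially exact here, since the integrands are trigonometric polynomials.)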

As a hint, it is suggested to show that $ \mathrm{span}\{e_n\}_{n\in\mathbb Z}$ is dense in $C := \{ f \in C([-1,1]) : f(-1) = f(1) \} \cong C(S^1)$ with respect to the supremum norm, using Stone-Weierstrass.

I don’t want a full proof; rather, I want to understand the rough idea. In particular, I do not yet see how proving density as recommended in the hint helps with the proof of completeness.

I mean, I have to get from "assume $\langle e_n, f \rangle = 0$ for all $n \in \mathbb Z$" somehow to "this implies $f = 0$". What is the idea in between, and what role does the density from the hint play here?

Answer:

It is enough to show that, as $L^2$ functions, $$f = \lim_{N \rightarrow \infty} \sigma_{f,N}, \qquad \sigma_{f,N} := \frac{1}{N+1}\sum_{k=0}^{N} \sum_{n=-k}^{k} \langle e_n, f \rangle\, e_n,$$ i.e. that the Cesàro means of the partial Fourier sums of $f$ converge to $f$.

This suffices because if $\langle e_n, f \rangle = 0$ for all $n \in \mathbb Z$, then $\sigma_{f,N} = 0$ for every $N$, and the limit gives $\|f\| = 0$, hence $f = 0$.

Let $$K_N(t) := \frac{1}{N+1}\sum_{k = 0}^N \sum_{n=-k}^k \mathrm e^{\mathrm i\pi n t} = \frac{1}{N+1}\, \frac{\sin^2(\pi(N+1)t/2)}{\sin^2(\pi t/2)},$$ the Fejér kernel for period $2$; the second equality is the standard closed form.
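(Aside: this closed form can be checked numerically by comparing the double sum with the right-hand side away from the zeros of the denominator; a sketch of my own, assuming NumPy.)

```python
import numpy as np

def fejer_sum(t, N):
    """Double sum (1/(N+1)) * sum_{k=0}^{N} sum_{n=-k}^{k} exp(i*pi*n*t)."""
    total = np.zeros_like(t, dtype=complex)
    for k in range(N + 1):
        for n in range(-k, k + 1):
            total += np.exp(1j * np.pi * n * t)
    return np.real(total) / (N + 1)

def fejer_closed(t, N):
    """Closed form sin^2(pi*(N+1)*t/2) / ((N+1) * sin^2(pi*t/2))."""
    return np.sin(np.pi * (N + 1) * t / 2) ** 2 / ((N + 1) * np.sin(np.pi * t / 2) ** 2)

t = np.linspace(0.01, 1.99, 500)  # avoid t = 0 and t = 2, where the closed form is 0/0
for N in (1, 5, 20):
    assert np.allclose(fejer_sum(t, N), fejer_closed(t, N), atol=1e-9)
print("double sum matches the closed form")
```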

Only the $n = 0$ terms contribute to its integral, so $$\|K_N\|_1 = \int_{-1}^{1} |K_N(t)|\, \mathrm dt = \int_{-1}^{1} K_N(t)\, \mathrm dt = \frac{1}{N+1}\sum_{k=0}^{N} 2 = 2 \quad \text{for every } N,$$ where the first equality uses $K_N \geq 0$.

Now define $$\sigma_{f,N}(x) = \frac{1}{N+1}\sum_{k=0}^{N} \sum_{n=-k}^{k} \langle e_n, f \rangle\, e_n(x) = \frac{1}{2}\int_{-1}^{1} f(y)\, K_N(x - y)\, \mathrm dy = \frac{1}{2}(f * K_N)(x);$$ the factor $\frac12$ comes from the normalization $e_n(x) = \frac{1}{\sqrt 2}\mathrm e^{\mathrm i\pi n x}$.

Since continuous functions with matching endpoint values are dense in $L^2([-1,1])$, given $\epsilon > 0$ we can pick a continuous $g$ with $g(-1) = g(1)$ and $\|g - f\|_2 < \epsilon$.

By Fejér's theorem, the Cesàro means of the Fourier series of a continuous periodic function converge to it uniformly; hence $$\sigma_{g,N} \rightarrow g$$ in the supremum norm, and therefore also in $L^2$, so $\|\sigma_{g,N} - g\|_2 < \epsilon$ for all large $N$. (This is where the hint enters: it is exactly a statement about uniform approximation of periodic continuous functions by elements of $\mathrm{span}\{e_n\}$.)

Hence, for all large $N$, $$\|\sigma_{f,N} - f\|_2 \leq \|\sigma_{f,N} - \sigma_{g,N}\|_2 + \|\sigma_{g,N} - g\|_2 + \|g - f\|_2 < \|\sigma_{f-g,N}\|_2 + 2\epsilon,$$ using linearity: $\sigma_{f,N} - \sigma_{g,N} = \sigma_{f-g,N}$.

By Young's convolution inequality (https://en.wikipedia.org/wiki/Young%27s_convolution_inequality), $$\|\sigma_{f-g,N}\|_2 = \tfrac{1}{2}\|(f-g) * K_N\|_2 \leq \tfrac{1}{2}\|K_N\|_1\, \|f-g\|_2 = \|f-g\|_2 < \epsilon,$$ so $$\|\sigma_{f,N} - f\|_2 < 3\epsilon.$$ Since $\epsilon > 0$ was arbitrary, $\sigma_{f,N} \rightarrow f$ in $L^2$.
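(Young's inequality $\|f * K\|_2 \leq \|K\|_1 \|f\|_2$ for periodic convolution can itself be probed numerically via a circular FFT convolution; an illustration of my own, with an arbitrary random kernel rather than $K_N$.)

```python
import numpy as np

rng = np.random.default_rng(0)
M = 1024
h = 2.0 / M  # grid spacing for a uniform periodic grid on [-1, 1)

f = rng.standard_normal(M)           # arbitrary test function values
K = np.abs(rng.standard_normal(M))   # arbitrary nonnegative kernel values

# Periodic convolution (f * K)(x) = int_{-1}^{1} f(y) K(x - y) dy,
# approximated by a circular convolution via the FFT, scaled by the grid spacing.
conv = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(K))) * h

l2 = lambda u: np.sqrt(np.sum(np.abs(u) ** 2) * h)  # L2 norm on [-1, 1]
l1 = lambda u: np.sum(np.abs(u)) * h                # L1 norm on [-1, 1]

assert l2(conv) <= l1(K) * l2(f)  # Young: ||f*K||_2 <= ||K||_1 * ||f||_2
print("Young's inequality holds on this sample")
```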

Hence we have, as $L^2$ functions,

$$f = \lim_{N \rightarrow \infty} \sigma_{f,N} = \lim_{N \rightarrow \infty} \frac{1}{N+1}\sum_{k=0}^{N} \sum_{n=-k}^{k} \langle e_n, f \rangle\, e_n$$

Now by the argument at the beginning of this answer, this concludes the proof.
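(As an illustration of my own, not part of the proof: one can compute the Cesàro means $\sigma_{f,N}$ for a concrete continuous periodic function, e.g. $f(x) = |x|$, and watch the $L^2$ error decrease. The code uses that the Cesàro average weights the $n$-th Fourier term by $1 - \frac{|n|}{N+1}$.)

```python
import numpy as np

M = 4000
x = np.linspace(-1.0, 1.0, M, endpoint=False)
h = 2.0 / M
f = np.abs(x)  # continuous on [-1, 1] with f(-1) = f(1)

def coeff(n):
    """Fourier coefficient <e_n, f> with e_n(x) = exp(i*pi*n*x)/sqrt(2)."""
    return np.sum(np.conj(np.exp(1j * np.pi * n * x)) * f) * h / np.sqrt(2)

def cesaro(N):
    """Cesaro mean sigma_{f,N} = sum_{|n|<=N} (1 - |n|/(N+1)) <e_n,f> e_n."""
    s = np.zeros(M, dtype=complex)
    for n in range(-N, N + 1):
        s += (1 - abs(n) / (N + 1)) * coeff(n) * np.exp(1j * np.pi * n * x) / np.sqrt(2)
    return np.real(s)

errs = [np.sqrt(np.sum((cesaro(N) - f) ** 2) * h) for N in (2, 8, 32)]
print("L2 errors for N = 2, 8, 32:", errs)
assert errs[0] > errs[1] > errs[2]  # the error decreases with N
```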


In a typical proof of Fourier series convergence, one gets stuck interchanging a limit with an integral. In this regard, I have the following theorem of my own; it may be useful to someone. In particular, if a kernel $K_N$ satisfies the bounds imposed on $g_N$ below, the relevant integrals can hopefully be evaluated easily.

Suppose that $$ |g_N(x)| \leq \begin{cases} \frac{|g(x)|}{N}, & \text{if } x \notin I_N\\ C \cdot N, & \text{if } x \in I_N \end{cases} $$ where $g$ is integrable, $C$ is a constant, and $I_N = \left(x^* - \frac{1}{N},\, x^* + \frac{1}{N}\right)$. Then $$\int g_N(x)\, \mathrm dx \rightarrow \lim_{N \rightarrow \infty,\ x_N \rightarrow x^*} \frac{2\, g_N(x_N)}{N},$$ provided the limit on the right-hand side exists.

Proof:

$$\int g_N(x)\, \mathrm dx = \int_{x \notin I_N} g_N(x)\, \mathrm dx + \int_{x \in I_N} g_N(x)\, \mathrm dx$$

$$\left|\int_{x \notin I_N} g_N(x)\, \mathrm dx\right| \leq \frac{1}{N}\int |g(x)|\, \mathrm dx \rightarrow 0$$

Hence $$\lim_N \int g_N(x)\, \mathrm dx = \lim_N \int_{x \in I_N} g_N(x)\, \mathrm dx$$

Since $|I_N| = \frac{2}{N}$, $$\frac{2 \inf_{x \in I_N} g_N(x)}{N} \leq \int_{x \in I_N} g_N(x)\, \mathrm dx \leq \frac{2 \sup_{x \in I_N} g_N(x)}{N},$$ so, when the limit exists, $$\lim_N \int_{x \in I_N} g_N(x)\, \mathrm dx = \lim_{N \rightarrow \infty,\ x_N \rightarrow x^*} \frac{2\, g_N(x_N)}{N}.$$

The limit on the right needs to exist, though; otherwise one has to work with a specific sequence $\{x_N\}$ depending on $g_N$.