The sequence of $\left \{ f_{n} (x)\right \} $ satisfying $ \lim_{n \to \infty}\int_{-1}^{1} f_{n}(x)g(x)dx=g(0)$


$\textbf{Exercise}$

Let $\left \{ f_{n} \right \}$ be a sequence of real continuous functions defined on $\left[-1, 1 \right]$. If

$(1)$ $$\lim_{n \to \infty} \int_{-1}^{1} f_{n}(x)\,dx=1,$$ $(2)$ $$\text{for any fixed } \delta \in \left(0, 1 \right),\quad f_{n}\rightarrow 0 \text{ uniformly on } [-1,-\delta]\cup[\delta,1],$$

then prove that for any continuous function $g(x)$ on $[-1,1]$

$$\lim_{n \to \infty}\int_{-1}^{1} f_{n}(x)g(x)dx=g(0)$$


What puzzles me is that a sequence of functions $\{f_{n}\}$ satisfying the above conditions does not seem to exist, let alone allow one to prove $\lim_{n \to \infty}\int_{-1}^{1} f_{n}(x)g(x)dx=g(0)$.

Next I will outline why such continuous functions $f_{n}$ cannot exist.


According to $(1)$ and $(2)$, for any fixed $\delta \in \left(0, 1 \right) $,

$$\lim_{n \to \infty} \int_{-\delta }^{\delta }f_{n}(x)dx=1$$

  • First, we can find a monotonically increasing subsequence of integers $\{n_{k}\}$ such that $\lim_{k \to \infty} f_{n_{k}}(0)=\infty$.

  • Secondly, there is a monotonically increasing subsequence of $\{n_{k}\}$, denoted $\{{n^{'}_{l}}\}$, such that $f_{n^{'}_{l}}(0)>l$.

  • Thirdly, there must exist $x_{l_{0}}\in (-1,1)$ such that $f_{n^{'}_{l_{0}}}(0)>l_{0}$ and $f_{n^{'}_{l_{0}}}(x_{l_{0}})<\frac{1}{l_{0}}$.

  • Finally, $f_{n^{'}_{l_{0}}}(x)$ satisfies the hypotheses of the $\textit{intermediate value theorem}$ since it is continuous on $[-1,1]$. Therefore,${\color{Red} {\text{ we can find a monotonically decreasing }}}$${\color{red}{\textrm{ sequence } x_{m}\in (0,x_{l_{0}}),\textrm{ such that }}} $ $${\color{red}{x_{m}\rightarrow 0,m\rightarrow\infty } }\textrm{ and } f_{n^{'}_{l_{0}}}(x_{m})=\sum^{m}_{s=1}\frac{l_{0}}{2^{s}}+\frac{1}{l_{0}\cdot 2^m}.$$

To sum up, we get a paradoxical result $l_{0}=\lim_{m\rightarrow\infty}f_{n^{'}_{l_{0}}}(x_{m})=f_{n^{'}_{l_{0}}}(0)>l_{0}.$

THANKS FOR EVERYONE'S HELP! After checking carefully, I find that the red part cannot be guaranteed, so my argument, which attempted to show that $\{f_{n}\}$ does not exist, is actually false. The focus now should be on solving the exercise. If we can show that there exist a sufficiently large integer $N_{0}$ and a sufficiently small $\delta>0$ such that for every $n> N_{0}$ the function $f_{n}$ is non-negative (resp. non-positive) on $[-\delta,\delta]$, then the first mean value theorem for integrals will settle it.

There are 3 answers below.

BEST ANSWER

EDIT:

The statement is not true if no $L^{1}$ bound on the integrals is assumed; see here.
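For concreteness, here is a numerical sketch of one family of counterexamples of this kind (my own illustrative construction, not necessarily the one behind the link): two narrow triangular bumps of masses $n$ and $1-n$ centred at $1/\sqrt{n}$ and $2/\sqrt{n}$. Each $f_n$ is continuous with $\int_{-1}^{1}f_n\,dx=1$, both bumps slide into $(0,\delta)$ so $f_n\to 0$ uniformly away from $0$, yet $\int_{-1}^{1}|f_n|\,dx=2n-1\to\infty$, and testing against $g(x)=x$ gives $\int_{-1}^{1}f_n(x)\,x\,dx=(2-n)/\sqrt{n}\to-\infty$ instead of $g(0)=0$.

```python
import math

def bump(x, c, w, mass):
    # triangular bump centred at c with half-width w and total integral `mass`
    t = 1.0 - abs(x - c) / w
    return (mass / w) * t if t > 0.0 else 0.0

def f(n, x):
    # illustrative counterexample: mass n at 1/sqrt(n) plus mass 1-n at 2/sqrt(n);
    # the total integral is 1 for every n, and both bumps slide toward 0, so
    # f_n -> 0 uniformly on [-1,-delta] u [delta,1], but int |f_n| = 2n-1 -> infinity
    s, w = n ** -0.5, n ** -2.0
    return bump(x, s, w, n) + bump(x, 2.0 * s, w, 1.0 - n)

def midpoint(fun, a, b, m=4000):
    # midpoint quadrature rule on [a, b]
    h = (b - a) / m
    return sum(fun(a + (k + 0.5) * h) for k in range(m)) * h

def moments(n):
    # integrate over each bump's support separately to resolve the narrow spikes
    s, w = n ** -0.5, n ** -2.0
    total = sum(midpoint(lambda x: f(n, x), c - w, c + w) for c in (s, 2.0 * s))
    first = sum(midpoint(lambda x: f(n, x) * x, c - w, c + w) for c in (s, 2.0 * s))
    return total, first

for n in (16, 100, 400):
    total, first = moments(n)
    # total stays ~1, but the first moment (the test against g(x) = x)
    # behaves like (2 - n)/sqrt(n) -> -infinity rather than g(0) = 0
    print(n, round(total, 4), round(first, 4))
```

The point is exactly the role of the $L^{1}$ bound: the cancellation between the huge positive and negative masses keeps $\int f_n = 1$ while $\int f_n g$ diverges.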

So now I will prove the statement under the assumption that

\begin{align*} \sup_{n}\int_{-1}^{1}|f_{n}(x)|dx\leq M<\infty. \end{align*}

Fix $\delta\in(0,1)$; then \begin{align*} 1=\lim_{n}\int_{-1}^{1}f_{n}(x)dx=\lim_{n}\left(\int_{-1}^{-\delta}f_{n}(x)dx+\int_{-\delta}^{\delta}f_{n}(x)dx+\int_{\delta}^{1}f_{n}(x)dx\right). \end{align*} Since $f_{n}\rightarrow 0$ uniformly on $[-1,-\delta]\cup[\delta,1]$, this reduces to \begin{align*} \lim_{n}\int_{-\delta}^{\delta}f_{n}(x)dx=1. \end{align*} Now write \begin{align*} &\int_{-1}^{1}f_{n}(x)g(x)dx\\ &=\int_{-1}^{-\delta}f_{n}(x)g(x)dx+\int_{\delta}^{1}f_{n}(x)g(x)dx+\int_{-\delta}^{\delta}f_{n}(x)(g(x)-g(0))dx\\ &~~~~+\left(\int_{-\delta}^{\delta}f_{n}(x)dx\right)g(0)\\ &=I_{n}+J_{n}+K_{n}+L_{n}. \end{align*} Since $g$ is bounded, $f_{n}g\rightarrow 0$ uniformly on $[-1,-\delta]\cup[\delta,1]$, hence $I_{n},J_{n}\rightarrow 0$. Also note that $L_{n}\rightarrow g(0)$.

Given $\epsilon>0$, choose the $\delta>0$ above so that \begin{align*} |g(x)-g(0)|<\epsilon,~~~~|x|\leq\delta, \end{align*} then \begin{align*} |K_{n}|\leq M\cdot\epsilon \end{align*} for every $n$. Letting $n\rightarrow\infty$ and then $\epsilon\rightarrow 0$, we see that $K_{n}\rightarrow 0$.
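As a quick numerical sanity check of this argument (not part of the proof), take the non-negative triangular spikes $f_n(x)=\max(0,\,n-n^2|x|)$, for which $\int_{-1}^{1}f_n=\int_{-1}^{1}|f_n|=1$ (so the $L^{1}$ bound holds with $M=1$), and an arbitrary continuous test function $g$ with $g(0)=1$:

```python
import math

def f(n, x):
    # non-negative triangular spike of height n on [-1/n, 1/n]; its integral is
    # exactly 1 for every n, so the L^1 bound holds with M = 1
    return max(0.0, n - n * n * abs(x))

def g(x):
    # an arbitrary continuous test function (chosen for illustration); g(0) = 1
    return math.cos(x) + 0.5 * x

def integral(n, m=200000):
    # midpoint rule for int_{-1}^{1} f_n(x) g(x) dx
    h = 2.0 / m
    return sum(f(n, -1.0 + (k + 0.5) * h) * g(-1.0 + (k + 0.5) * h)
               for k in range(m)) * h

for n in (10, 100, 1000):
    print(n, integral(n))  # tends to g(0) = 1 as n grows
```

Here the error is of order $\int_{-1}^{1}f_n(x)x^2\,dx = 1/(6n^2)$, matching the $K_n\to 0$ step of the proof.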

ANSWER

Many such sequences of functions exist. You can use step functions if you don't need continuity. You can use triangular functions if you need continuity but not differentiability.

If you need continuity and differentiability, you can use Gaussian curves: $$f_n(x) = \frac{\sqrt{n}}{\sqrt{2\pi \sigma^2}}\, e^{-n x^2/(2\sigma^2)}.$$ For any $n$, the total area under the curve on $(-\infty, \infty)$ is $1$, and as $n$ increases, more and more of that area is concentrated near $0$, so the integral over $[-1,1]$ tends to $1$. Moreover, $f_n \to 0$ uniformly on $[-1,-\delta]$ and $[\delta, 1]$.

As the functions go to zero outside a narrow range around the origin, multiplying by $g(x)$ only uses values of $g$ near $x=0$. Heuristically, for smooth $g$, expanding in a Taylor series $g(x) = g(0) + g'(0)x + \frac{1}{2}g''(0)x^2+\cdots$ shows that all the terms except $g(0)$ are multiplied by smaller and smaller quantities and vanish in the limit, leaving only $g(0)$ times an area that tends to $1$.
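To see numerically that these Gaussians (with, say, $\sigma=1$) satisfy both hypotheses, one can check that $\int_{-1}^{1}f_n\,dx\to 1$ while $\sup_{[\delta,1]}f_n\to 0$ for a fixed $\delta$; a small sketch:

```python
import math

SIGMA = 1.0  # the free width parameter from the formula above (taken to be 1)

def f(n, x):
    # Gaussian bump from the answer
    return (math.sqrt(n) / math.sqrt(2.0 * math.pi * SIGMA ** 2)
            * math.exp(-n * x * x / (2.0 * SIGMA ** 2)))

def integral(n, m=100000):
    # midpoint rule for int_{-1}^{1} f_n(x) dx  -- hypothesis (1)
    h = 2.0 / m
    return sum(f(n, -1.0 + (k + 0.5) * h) for k in range(m)) * h

def tail_sup(n, delta=0.1):
    # sup of f_n on [delta, 1]; the Gaussian is decreasing there, so it is f_n(delta)
    return f(n, delta)

for n in (100, 1000, 10000):
    print(n, integral(n), tail_sup(n))
```

The integral approaches $1$ (the mass outside $[-1,1]$ is exponentially small), and the tail supremum decays like $\sqrt{n}\,e^{-n\delta^2/(2\sigma^2)}$, which gives the required uniform convergence.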

ANSWER

This question reminds me of the construction of Dirac's delta function as a limit of functions $f_n(x)$ such as $$ f_n(x)=\begin{cases} n-|n^2x|&,\quad |x|<1/n\\0&,\quad \text{elsewhere,} \end{cases} $$ each of which is continuous with $\int_{-1}^{1}f_n(x)dx=1$. A proof for general such $f_n(x)$ comes from splitting the integral $\int_{-1}^1 f_n(x)g(x)dx$ as $$ \int_{-1}^1 f_n(x)g(x)dx= \int_{-1}^{-\delta} f_n(x)g(x)dx + \int_{-\delta}^\delta f_n(x)g(x)dx + \int_{\delta}^1 f_n(x)g(x)dx. $$ Since $g(x)$ is continuous on $[-1,1]$, it is also bounded on $[-1,1]$: there exists $M>0$ such that $$\forall x\in[-1,1],\quad |g(x)|<M.$$ By the uniform convergence of $f_n(x)$ to zero on $[-1,-\delta]\cup[\delta,1]$, the first and third integrals vanish in the limit, so $$ \lim_{n\to\infty}\int_{-1}^1 f_n(x)g(x)dx=\lim_{n\to\infty}\int_{-\delta}^\delta f_n(x)g(x)dx, \qquad\text{while}\qquad \lim_{n\to\infty}\int_{-\delta}^\delta f_n(x)dx=1. $$ Again by the continuity of $g(x)$, for $\delta$ small enough we have $|g(x)-g(0)|<\epsilon$ on $[-\delta,\delta]$; since the $f_n$ above are non-negative, the squeeze $$ \left(\min_{[-\delta,\delta]}g\right)\int_{-\delta}^\delta f_n(x)dx \;\le\; \int_{-\delta}^\delta f_n(x)g(x)dx \;\le\; \left(\max_{[-\delta,\delta]}g\right)\int_{-\delta}^\delta f_n(x)dx $$ yields the result you expect.