I have some questions about the theorem, from the chapter on Riemann integrals, concerning uniformly convergent series and uniformly convergent sequences.


"Let $\{f_n(x)\}$ be a sequence of functions defined and bounded on $[a,b]$. If the series $$\sum_{n=1}^\infty f_n(x)$$ is uniformly convergent and $f(x)$ is its sum, then $$\int_a^b f(x)dx=\sum_{n=1}^\infty \int_a^b f_n(x)dx$$ Proof. We have $$f(x)=\sum_{k=1}^n f_k(x)+R_n(x)$$ where $R_n(x)$ is the remainder of the series. Using the additivity property of the integral, we can write $$\int_a^b f(x)dx=\sum_{k=1}^n \int_a^b f_k(x)dx+\int_a^bR_n(x)dx$$ To establish the formula $$\int_a^b f(x)dx=\sum_{n=1}^\infty \int_a^b f_n(x)dx$$ we have to show that $$\lim_{n} \int_a^bR_n(x)dx=0$$ For this, we take into account that, because of the uniform convergence of the series, for every $\varepsilon>0$ there exists $m=m_\varepsilon$ such that $$\vert R_n(x)\vert<\frac{\varepsilon}{b-a}$$ for all $x\in[a,b]$ whenever $n\ge m$. Using the monotonicity property of the integral, it follows that $$\Big\vert \int_a^b R_n(x)dx \Big\vert \le \int_a^b \vert R_n(x) \vert dx \lt \varepsilon$$ if $n\ge m$. So the formula is proved."
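Before going through the proof, a quick numerical sanity check of the statement helped me (this example is my own, not from the book): take $f_n(x)=x^n/n!$ on $[0,1]$, whose sum is $f(x)=e^x-1$ and which converges uniformly by the Weierstrass M-test (since $|f_n(x)|\le 1/n!$), and compare $\int_a^b f$ with $\sum_n \int_a^b f_n$:

```python
import math

a, b = 0.0, 1.0

def midpoint_integral(g, a, b, steps=10000):
    # simple midpoint Riemann sum, accurate enough for this check
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

# f_n(x) = x**n / n! ; the series sums to e^x - 1 and converges
# uniformly on [0, 1] (Weierstrass M-test: |f_n(x)| <= 1/n!)
lhs = midpoint_integral(lambda x: math.exp(x) - 1.0, a, b)   # integral of the sum
rhs = sum(midpoint_integral(lambda x, n=n: x**n / math.factorial(n), a, b)
          for n in range(1, 31))                             # sum of the integrals
print(lhs, rhs)  # both approximately e - 2 = 0.7182818...
```

Both numbers agree to many digits, which is exactly what the theorem asserts.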


Ok, so that's the proof of the theorem from the Riemann integral chapter of an older book, which is from neither the USA nor the UK and has never been translated into another language, and the theorem is not entirely clear to me. First of all, we have a sequence of functions defined and bounded on $[a,b]$, namely $\{f_1(x), f_2(x), \dots\}$. The series built from this sequence is $$\sum_{n=1}^\infty f_n(x)$$ so its sum is $$f(x)=\lim_{n\to\infty}\sum_{k=1}^n f_k(x)$$ and the theorem claims $$\int_a^b f(x)dx=\sum_{n=1}^\infty \int_a^b f_n(x)dx$$ Ok, it seems clear to me up to now, but we have to prove it. We start from $$f(x)=\sum_{k=1}^n f_k(x)+R_n(x)$$ and from this we get $$\int_a^b f(x)dx=\sum_{k=1}^n \int_a^b f_k(x)dx+\int_a^bR_n(x)dx$$ Clear up to now. But it says
"we have to show that $$\lim_{n} \int_a^bR_n(x)dx=0$$"
From the hypothesis we know $$f(x)=\lim_{n\to\infty}\sum_{k=1}^n f_k(x)$$ and comparing this with $$f(x)=\sum_{k=1}^n f_k(x)+R_n(x)$$ gives $$\lim_{n\to\infty} R_n(x)=0$$ for each $x$, so $$\int_a^b\lim_{n\to\infty} R_n(x)dx=\int_a^b 0\,dx=0$$ If I could swap the limit and the integral, this would give $$\lim_{n\to\infty} \int_a^bR_n(x)dx=0$$ which is exactly what we have to show. Also, are these the same thing? $$\lim_{n} \int_a^bR_n(x)dx=\lim_{n\to\infty} \int_a^bR_n(x)dx$$ Then it says
"For this we have to take into account that because of the uniform convergence of the considered series, for every $\varepsilon>0$ there exist $m=m_\varepsilon$ so that $$\vert R_n(x)\vert<\frac{\varepsilon}{(b-a)}$$ if $n\ge m$."
Two things are unclear here. The first is the uniform convergence itself. Maybe the confusion comes from the fact that I haven't seen a concrete example, with graphs, for both cases. I have read about pointwise and uniform convergence on lots of sites, and this answer was by far the most helpful thing I've read on the topic: https://math.stackexchange.com/q/679981
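To make the difference concrete for myself, here is a small numerical experiment (the two sequences are my own toy examples, not from the book): $f_n(x)=x^n$ converges only pointwise on $[0,1]$, while $g_n(x)=x^n/n$ converges uniformly, and the difference shows up in the supremum of the gap to the limit function:

```python
# Two toy sequences on [0, 1]:
#   f_n(x) = x**n      converges pointwise (to 0 for x < 1, to 1 at x = 1),
#                      but NOT uniformly: sup_x |f_n(x) - f(x)| stays near 1.
#   g_n(x) = x**n / n  converges uniformly to 0: sup_x |g_n(x)| <= 1/n -> 0.
xs = [i / 1000 for i in range(1001)]          # grid on [0, 1]
f_limit = lambda x: 1.0 if x == 1.0 else 0.0  # pointwise limit of f_n

def sup_gap(h, limit):
    # largest distance between h and its limit function over the grid
    return max(abs(h(x) - limit(x)) for x in xs)

for n in (10, 100, 1000):
    gap_f = sup_gap(lambda x: x**n, f_limit)
    gap_g = sup_gap(lambda x: x**n / n, lambda x: 0.0)
    print(n, gap_f, gap_g)  # gap_f stays large, gap_g shrinks like 1/n
```

Uniform convergence means that single worst-case gap goes to $0$, which is what lets a bound like $|R_n(x)|<\varepsilon$ hold for all $x$ at once.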
Now, taking the definition of uniform convergence for the series: for any $\varepsilon>0$ we choose, there exists a natural number $N$ (here $N=m=m_\varepsilon$) such that $$\vert S_n(x)-f(x)\vert<\varepsilon \quad\text{for all } x\in[a,b]$$ whenever $n\ge m$, where $S_n(x)=\sum_{k=1}^n f_k(x)$ is the partial sum. But $f(x)-S_n(x)=R_n(x)$, so this is the same as $$\vert R_n(x)\vert<\varepsilon$$ for all $x$, whenever $n\ge m$. Above, after some calculations which I assumed are similar to those in the book, I said that what we have to show is $$\lim_{n\to\infty} \int_a^bR_n(x)dx=0$$ and the uniform convergence forces the relation $$\vert R_n(x)\vert<\varepsilon$$ to be true, so it's like the condition of uniform convergence is exactly what validates the proof.
So the logic behind this proof is a bit misleading to me. We have a statement, $$\int_a^b f(x)dx=\sum_{n=1}^\infty \int_a^b f_n(x)dx$$ which is true under certain conditions. We begin with the general form $$f(x)=\sum_{k=1}^n f_k(x)+R_n(x)$$ After some calculations we see that the statement is true if and only if $$\lim_{n\to\infty} \int_a^bR_n(x)dx=0$$ But we know from the hypothesis that $$\sum_{n=1}^\infty f_n(x)$$ is uniformly convergent, which means precisely that $$\vert R_n(x)\vert<\varepsilon \text{ for all } x\in[a,b] \text{ once } n\ge m_\varepsilon$$ and this finishes the proof. So the logic seems a little circular to me: it's like we want to prove that something happens under some circumstances, and it's true because of those circumstances. It feels like trying to prove something obvious.
Now continuing from the penultimate quoted paragraph. The uniform convergence gives $$\vert R_n(x)\vert<\varepsilon' \text{ for all } x\in[a,b],\ n\ge m$$ for any $\varepsilon'>0$ we like. In particular, since $$\frac{\varepsilon}{b-a}>0$$ we may choose $\varepsilon'=\frac{\varepsilon}{b-a}$, which gives exactly the book's inequality $$\vert R_n(x)\vert<\frac{\varepsilon}{b-a}$$ And here is the second unclear thing: in my own calculations I kept ending up with a limit on the left-hand side, so are these the same? $$\lim_{n\to\infty} \vert R_n(x)\vert<\frac{\varepsilon}{b-a}\quad\text{and}\quad \vert R_n(x)\vert<\frac{\varepsilon}{b-a}$$ The book writes the inequality for each fixed $n\ge m$, with no limit. The end of the proof seems clear if what I assumed above really is the same as what's in the book.
"Using the monotonicity property of the integral, it follows that $$\Big\vert \int_a^b R_n(x)dx \Big\vert \le \int_a^b \vert R_n(x) \vert dx \lt \varepsilon$$ if $n\ge m$."
$$\Big\vert \int_a^b R_n(x)dx \Big\vert \le \int_a^b \vert R_n(x) \vert dx \lt \int_a^b \frac{\varepsilon}{b-a}dx=\frac{\varepsilon}{b-a}\,x\,\Big\vert_a^b=\frac{\varepsilon}{b-a}(b-a)=\varepsilon$$ if $n\ge m$.
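To see this chain of inequalities in action, I checked it numerically with my earlier example $f_k(x)=x^k/k!$ on $[0,1]$, so that $f(x)=e^x-1$ and $R_n(x)=f(x)-\sum_{k=1}^n x^k/k!$ (again my own choice, not the book's):

```python
import math

# Numeric check of the proof's key step on [0, 1], with my own example:
# f_k(x) = x**k / k!, f(x) = e^x - 1, R_n(x) = f(x) - sum_{k=1..n} x**k / k!
a, b = 0.0, 1.0
eps = 1e-6

def R(n, x):
    return (math.exp(x) - 1.0) - sum(x**k / math.factorial(k)
                                     for k in range(1, n + 1))

def midpoint_integral(g, steps=10000):
    # simple midpoint Riemann sum
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

n = 12  # large enough that the uniform bound |R_n(x)| < eps/(b-a) holds
sup_R = max(abs(R(n, i / 1000)) for i in range(1001))
print(sup_R < eps / (b - a))                            # True: uniform bound
print(abs(midpoint_integral(lambda x: R(n, x))) < eps)  # True: so |int R_n| < eps
```

Once $n$ passes the threshold $m_\varepsilon$, the uniform bound on $|R_n|$ really does force the integral of the remainder under $\varepsilon$, for each fixed $n$, with no limit involved.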
Now the formula $$\int_a^b f(x)dx=\sum_{n=1}^\infty \int_a^b f_n(x)dx$$ is proved. But again, setting aside that I called the logic misleading above: looking at the theorem after the proof, it seems to me like we merely proved an infinite version of the additivity of the integral, except that the uniform convergence condition is what made the proof work. Can anyone explain the intention of this theorem to me?
At the end of this theorem, the book adds: "Observation. This theorem can also be transposed to convergent sequences. It reads: Let $\{f_n(x)\}$ be a sequence of functions bounded and continuous on $[a,b]$. If the sequence $\{f_n(x)\}$ converges uniformly on $[a,b]$ to a function $f(x)$, then $$\int_a^b f(x)dx=\lim_{n} \int_a^b f_n(x)dx$$"
I don't get that observation, because I don't understand what value that $n$ approaches in $\lim_{n}$.
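My current understanding (please correct me if wrong) is that $\lim_n$ is just shorthand for $\lim_{n\to\infty}$: the integrals $I_n=\int_a^b f_n(x)dx$ form an ordinary sequence of numbers, and the claim is that this number sequence tends to $\int_a^b f(x)dx$. A toy example of my own, with $f_n(x)=x+1/n$ converging uniformly to $f(x)=x$ on $[0,1]$:

```python
# I_n = integral over [0,1] of f_n(x) = x + 1/n, computed by a midpoint sum.
# f_n converges uniformly to f(x) = x, so I_n should tend to
# the integral of x over [0,1], which is 1/2.
def I(n, steps=10000):
    h = 1.0 / steps
    return sum((i + 0.5) * h + 1.0 / n for i in range(steps)) * h

for n in (1, 10, 100, 1000):
    print(n, I(n))  # values approach 0.5
```

Here $I_n = 1/2 + 1/n$, a plain numerical sequence whose limit as $n\to\infty$ is $1/2=\int_0^1 x\,dx$, which is what I believe the observation means.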