Pointwise and uniform convergence of sequences of functions.


If I understand it right, uniform convergence of a sequence of functions $\{f_n\}$ means that there is a limit function $F$, and for any $\epsilon > 0$ we can always choose a high enough $n_0$ (the lower index of the function) such that for every $n > n_0$ and for every $x$, $f_n(x)$ is closer to the limit function (at the corresponding $x$) than $\epsilon$.
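In symbols, that definition (with the supremum making the "for every $x$" explicit) reads:

$$\forall \varepsilon > 0 \;\; \exists n_0 \;\; \forall n > n_0: \quad \sup_{x} |f_n(x) - F(x)| < \varepsilon,$$

which is the same as requiring $\sup_x |f_n(x) - F(x)| \to 0$ as $n \to \infty$. The key contrast with pointwise convergence is that the single $n_0$ must work for all $x$ at once.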

I have a simple example on the domain $[-1, 1]$. The function sequence is $f_n(x) = x^{2n}$.

Now the limit function is: $f(x) = 1$ if $x = \pm 1$, and $f(x) = 0$ otherwise.

They say the sequence converges only pointwise to the limit function. Why? I can choose whatever $\epsilon$ I like; I just take a high enough $n$, and then every $x^{2n}$ will be made close enough to $0$. And if $x = \pm 1$, then the limit function is equal to every member of the function sequence, so it is within the $\epsilon$ range too. Can you please explain this to me? Thank you!
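The claim can be sanity-checked numerically. Here is a minimal sketch (the function names and grid are my own, not from the question): for any fixed $n$, points just inside $1$ keep $|f_n(x) - f(x)|$ close to $1$, so the worst-case gap never shrinks.

```python
# For f_n(x) = x^(2n) on [-1, 1], the pointwise limit f is 0 for |x| < 1
# and 1 at x = +/-1. Uniform convergence would need the worst-case gap
# sup_x |f_n(x) - f(x)| to go to 0, but on a fine grid it stays near 1.

def f_n(x, n):
    return x ** (2 * n)

def f_limit(x):
    return 1.0 if abs(x) == 1.0 else 0.0

def max_gap(n, num_points=100001):
    """Largest |f_n(x) - f(x)| over an evenly spaced grid on [-1, 1]."""
    gaps = []
    for i in range(num_points):
        x = -1.0 + 2.0 * i / (num_points - 1)
        gaps.append(abs(f_n(x, n) - f_limit(x)))
    return max(gaps)

for n in (1, 10, 100, 1000):
    print(n, max_gap(n))  # the gap does not shrink toward 0 as n grows
```

The point is that "high enough $n$" works for each fixed $x$ separately, but no single $n$ works for all $x$ near $\pm 1$ at the same time.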



Best answer:

How about this: each $f_n$ is continuous at $1$, with $f_n(1) = 1$. This means that there is some interval $(x_n, 1]$ on which $f_n$ is greater than $3/4$. Note that $x_n < 1$ for each $n$.

Now if $f_n$ converged uniformly to $f$ (as defined in your question), then we would be able to pick some $n_0$ so that for each $n > n_0$ we had $|f_n(x) - f(x)| < 1/4$ for every $x \in [-1, 1]$. Yet for each $n$, on $(x_n, 1)$ we have $f(x) = 0$, so the difference $|f_n(x) - f(x)| = |f_n(x) - 0| > 3/4$. So we can never pick a large enough $n_0$ so that for every $n > n_0$, $f_n$ is within $1/4$ of $f$. Hence the convergence is not uniform.
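The answer only needs that *some* point $x_n$ exists; here is a sketch with an explicit choice (the formula $x_n = (3/4)^{1/(2n)}$ is my own, obtained by solving $x^{2n} = 3/4$):

```python
# For each n, exhibit a point x in (x_n, 1) where f_n(x) = x^(2n) > 3/4,
# so |f_n(x) - f(x)| > 3/4 there and the uniform bound 1/4 can never hold.

def witness(n):
    """A point x with x_n < x < 1 where f_n(x) exceeds 3/4."""
    x_n = 0.75 ** (1.0 / (2 * n))  # solves x_n^(2n) = 3/4 exactly
    return (x_n + 1.0) / 2.0       # midpoint of (x_n, 1)

for n in (1, 10, 100, 1000):
    x = witness(n)
    print(n, x, x ** (2 * n))      # the value stays above 3/4 for every n
```

Since $x \mapsto x^{2n}$ is increasing on $[0, 1]$ and the midpoint lies above $x_n$, the value at the witness point is strictly greater than $3/4$, exactly as the answer requires.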

Another answer:

The quick proof is the following: if a sequence of continuous functions converges uniformly to some limit function, then this limit function will also be continuous. In your case, your functions are continuous but the limit is not, therefore the convergence cannot be uniform.
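For reference, the standard triangle-inequality step behind that fact (the usual $\varepsilon/3$ argument) is:

$$|f(x) - f(y)| \leq |f(x) - f_n(x)| + |f_n(x) - f_n(y)| + |f_n(y) - f(y)|,$$

where the first and third terms are small for large $n$ uniformly in $x$ and $y$ (by uniform convergence), and the middle term is small for $y$ near $x$ (by continuity of $f_n$). So the uniform limit $f$ is continuous, and a discontinuous limit rules out uniform convergence.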

If you require a "proof with $\varepsilon$ and $\delta$", nobody will have the patience to write that down...

Edit:

Okay, since you really want to understand this, let's try a not entirely rigorous but very visual explanation.

Fix an $\varepsilon$. Choose some interval $[-r_1, r_1] \subset [-1, 1]$. You can find a number $n_1$ such that (using your own notation) $|f_n - f| < \varepsilon$ on $[-r_1, r_1]$ for all $n \geq n_1$.

Now, if you stretch your interval $[-r_1, r_1]$, attempting to cover the whole of $[-1, 1]$ (i.e. you make $r_1 \to 1$), you will discover that there is an $r_2$ with $r_1 < r_2 \leq 1$ such that $n_1$ is no longer good on $[-r_2, r_2]$ (i.e. it is too small). So you replace it by a larger $n_2$.

You repeat the whole procedure with an even larger interval $[-r_3, r_3]$, and you discover that $n_2$ is no longer good for this interval, so you replace it by an even larger $n_3$ and so on.

The larger your interval is (i.e. the closer its endpoints are to $-1$ and $1$), the larger your $n_0$ will have to be - and this game accelerates the closer you get to $\pm 1$. By now you've probably guessed: by the time you reach $\pm 1$, your $n_0$ will also have reached $\infty$, which is no longer a natural number.
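This "game" can be made quantitative. A minimal sketch (the closed-form threshold is my own, from solving $r^{2n} < \varepsilon$ for $n$, which gives $n > \ln\varepsilon / (2\ln r)$ for $0 < r < 1$):

```python
# The smallest n that makes |x^(2n)| < eps on all of [-r, r] blows up
# as r -> 1, which is exactly why no single n_0 works on [-1, 1].
import math

def smallest_n(eps, r):
    """Least n with r**(2*n) < eps, i.e. |f_n| < eps on [-r, r]."""
    return math.floor(math.log(eps) / (2 * math.log(r))) + 1

eps = 0.01
for r in (0.9, 0.99, 0.999, 0.9999):
    print(r, smallest_n(eps, r))  # the required n grows without bound
```

Each tenfold step of $r$ toward $1$ multiplies the required $n$ by roughly ten, matching the accelerating game described above.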