Intuitive explanation of proof of Abel's limit theorem


Assume the series $$f(x)=\sum_{n=0}^{\infty}a_n x^n$$ converges for $-r<x<r$. Abel's theorem says that if the series also converges at $x=r$, then $\lim_{x\to r^-} f(x)$ exists and $$\lim_{x\to r^-}f(x)=\sum_{n=0}^{\infty}a_n r^n.$$ Moreover, the convergence is uniform on $[0, r]$.

Every proof I've seen uses summation by parts and rewrites the series using either $c_n=\sum_{k=0}^n a_k$ or $A_n = \sum_{k=n}^{\infty}a_k$.

Is there a way to understand the proof of uniform convergence and continuity at $x=r$ geometrically? What I mean is: if someone asked me to explain why summation by parts works in the proof, and how it relates to the uniform convergence and continuity, I would not have a good answer.


Like a previous answer, this answer does not give geometric intuition, but I hope it shows how someone might reasonably have discovered the proof from scratch.

Note: I'm working WLOG in the situation where $r = 1$.

One thing to notice right off the bat: if you have some power series whose partial sums converge uniformly in some neighborhood of $x = 1$, then the coefficients of that series are necessarily summable. This is easy to see by applying the standard uniform convergence interchange-of-limits theorem at the limit point $x = 1$ (or by an easy $\epsilon / 3$-style argument).

So it's hopeless to try to find a proof which uses only the fact that the coefficients $a_n$ tend to zero as $n \to \infty$, and it's reasonable to look for a proof involving sums of the coefficients.

EDIT: Clarification of the above two paragraphs

  1. Sorry, I should've just said that the convergence of $\sum_n a_n$ is a stronger condition than $a_n \to 0$, so we should focus on somehow using $\sum_n a_n$ rather than spending time trying to use $a_n \to 0$ alone.
  2. Because, if you could somehow prove uniform convergence of $f_n(x) = \sum_{k=0}^n a_k x^k$ to $f(x) = \sum_n a_n x^n$ on $(0,1)$ from the hypothesis $a_n \to 0$ alone, then you could actually prove the other hypothesis, namely that $\sum_n a_n$ converges. This isn't reasonable.
  3. See this question which references a uniform convergence interchange of limits theorem:

    Proof explanation of some theorem about uniform convergence

Someone who has played around a lot with power series may have noticed the following:

Lemma: If $\{ b_n \}$ is a sequence with $b_n \to 0$ as $n \to \infty$, then the sequence of functions $$ g_k(x) = (1 - x) \sum_{n = 0}^k b_n x^n $$ converges uniformly on $(0, 1)$ to the function $$ g(x) = (1 - x) \sum_{n \ge 0} b_n x^n $$ The lemma is very easy to prove, since the radius of convergence of $\sum\limits_{n \ge 0} b_n x^n$ is at least one, and so absolute convergence gives $$ |g(x) - g_k(x)| \le (1-x) \sum_{k < n} |b_n| x^n \le (1-x) \sum_{k < n} \epsilon x^n = \epsilon x^{k+1} (1-x) \sum_{0 \le n} x^n = \epsilon x^{k+1} < \epsilon $$ since $|b_n| < \epsilon$ for all $n > k$ when $k$ is large enough.
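As a quick numerical illustration of the lemma (not part of the proof), here is a sketch with the hypothetical choice $b_n = 1/(n+1)$, which tends to $0$; on a grid in $(0,1)$, the sup of $|g(x) - g_k(x)|$ should stay below $\sup_{n>k}|b_n| = 1/(k+2)$:

```python
# Numerical sketch of the lemma, using the hypothetical choice
# b_n = 1/(n+1), which tends to 0.
N = 2000                                       # truncation standing in for the infinite sum
b = [1.0 / (n + 1) for n in range(N)]
grid = [i / 1000 for i in range(1, 1000)]      # grid approximating (0, 1)

def g_k(x, k):
    """(1 - x) * sum_{n=0}^{k} b_n x^n, the lemma's partial sum."""
    return (1 - x) * sum(b[n] * x**n for n in range(k + 1))

for k in (5, 10, 50):
    # sup over the grid of |g(x) - g_k(x)|, approximating g by g_{N-1}
    sup_err = max(abs(g_k(x, N - 1) - g_k(x, k)) for x in grid)
    print(k, sup_err <= b[k + 1])              # bound: sup_{n>k} |b_n| = 1/(k+2)
```

The bound holds uniformly in $x$ because the factor $x^{k+1}(1-x)\sum x^n = x^{k+1}$ never exceeds $1$, exactly as in the lemma's proof.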

You'd like to coerce the partial sums for $f(x)$ closer to the form in the lemma. There are a couple ways to approach it:

Approach #1: Use Summation By Parts, writing $s_n = \sum_{j=0}^n a_j$ for the partial sums: \begin{equation*}\begin{aligned} f_k(x) & = \sum_{n = 0}^k a_n x^n \\ & = s_k x^{k+1} + \sum_{n = 0}^k s_n \left( x^n - x^{n+1} \right) \\ & = s_k x^{k+1} + (1 - x) \sum_{n = 0}^k s_n x^n \end{aligned}\end{equation*}

Approach #2: Multiply by $1 = (1 - x)(1 + x + x^2 + \cdots)$ \begin{equation*}\begin{aligned} f_k(x) & = \sum_{n = 0}^k a_n x^n \\ & = (1 - x)(1 + x + x^2 + \cdots) \sum_{n = 0}^k a_n x^n \\ & = (1 - x) \left( \sum_{n = 0}^k s_n x^n + \sum_{n \ge k + 1} s_k x^n \right) \\ & = (1 - x) \sum_{n = 0}^k s_n x^n + s_k (1 - x) \sum_{n \ge k + 1} x^n \\ & = (1 - x) \sum_{n = 0}^k s_n x^n + s_k x^{k+1} (1 - x)(1 + x + x^2 + \cdots) \\ & = s_k x^{k+1} + (1 - x) \sum_{n = 0}^k s_n x^n \end{aligned}\end{equation*}

That was the main trick, and either way we got $$ f_k(x) = s_k x^{k+1} + (1 - x) \sum_{n = 0}^k s_n x^n $$ We have made progress.
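As a sanity check (again, not part of the argument), the identity $f_k(x) = s_k x^{k+1} + (1-x)\sum_{n=0}^k s_n x^n$ can be verified numerically with arbitrary hypothetical coefficients:

```python
import random

# Sanity check of the identity
#   f_k(x) = s_k x^{k+1} + (1 - x) * sum_{n=0}^{k} s_n x^n,
# with arbitrary hypothetical coefficients a_n and partial sums s_n.
random.seed(0)
a = [random.uniform(-1, 1) for _ in range(30)]
s = [sum(a[:n + 1]) for n in range(len(a))]

def f_k(x, k):
    return sum(a[n] * x**n for n in range(k + 1))

def rewritten(x, k):
    return s[k] * x**(k + 1) + (1 - x) * sum(s[n] * x**n for n in range(k + 1))

for x in (0.1, 0.5, 0.9):
    for k in (0, 7, 29):
        assert abs(f_k(x, k) - rewritten(x, k)) < 1e-12
print("identity verified on samples")
```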

But now the issue is that in general it's not the case that $s_n \to 0$ as $n \to \infty$.

But it is the case that $(s_n - s) \to 0$, where $s = \sum_{n \ge 0} a_n$ is the limit of the partial sums, so:

\begin{equation*}\begin{aligned} f_k(x) & = s_k x^{k+1} + (1 - x) \sum_{n = 0}^k s_n x^n \\ & = s_k x^{k+1} + (1 - x) \sum_{n = 0}^k s x^n + (1 - x) \sum_{n = 0}^k (s_n - s) x^n \\ & = s_k x^{k+1} + (1 - x) s \sum_{n = 0}^k x^n + (1 - x) \sum_{n = 0}^k (s_n - s) x^n \\ & = s_k x^{k+1} + s (1 - x) \sum_{n = 0}^k x^n + (1 - x) \sum_{n = 0}^k (s_n - s) x^n \\ & = s_k x^{k+1} + s (1 - x^{k + 1})+ (1 - x) \sum_{n = 0}^k (s_n - s) x^n \\ & = \Bigg\{ s + (s_k - s) x^{k+1} \Bigg\} + \Bigg\{ (1 - x) \sum_{n = 0}^k (s_n - s) x^n \Bigg\} \\ & = \Bigg\{ \text{(A)} \Bigg\} + \Bigg\{ \text{(B)} \Bigg\} \end{aligned}\end{equation*} We are in very good shape now, since (A) is obviously uniformly convergent, and (B) is uniformly convergent by the lemma.
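The final decomposition can also be checked numerically. Below is a sketch using the hypothetical coefficients $a_n = (-1)^n/(n+1)$, for which $s = \log 2$; note the decomposition $f_k(x) = \text{(A)} + \text{(B)}$ is an algebraic identity, so it holds exactly up to floating-point rounding:

```python
import math

# Sanity check of the decomposition
#   f_k(x) = {s + (s_k - s) x^{k+1}} + {(1 - x) * sum_{n=0}^{k} (s_n - s) x^n},
# using the hypothetical coefficients a_n = (-1)^n / (n+1), for which s = log 2.
K = 50
a = [(-1) ** n / (n + 1) for n in range(K + 1)]
s_part = [sum(a[:n + 1]) for n in range(K + 1)]   # partial sums s_n
s = math.log(2)                                   # s = sum_n a_n

def f_k(x, k):
    return sum(a[n] * x**n for n in range(k + 1))

def A_plus_B(x, k):
    A = s + (s_part[k] - s) * x ** (k + 1)        # term (A)
    B = (1 - x) * sum((s_part[n] - s) * x**n for n in range(k + 1))  # term (B)
    return A + B

for x in (0.2, 0.8):
    for k in (3, 20, 50):
        assert abs(f_k(x, k) - A_plus_B(x, k)) < 1e-12
print("decomposition verified on samples")
```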