Limit Of A Sequence Formal Proof


Let $\left (x_n \right )_{n=1}^{\infty}$ be a convergent sequence in $\mathbb{R}$ with limit $x$. Show that $\lim_{n\rightarrow \infty}\frac{1}{n}\sum_{k=1}^{n}x_k = x$.

I had been working at it for a while and was completely lost. Even after staring at the solution for a while, I still had many questions. The solution states:

Let $\varepsilon > 0$. As $\lim_{n\rightarrow \infty}x_n = x$, there is $n_1 \in \mathbb{N}$ s.t. $|x_n -x| < \frac{\varepsilon}{2}$ for $n \ge n_1$.

Choose $n_2 \in \mathbb{N}$ s.t. $$\frac{1}{n_2}\left|\sum_{k=1}^{n_1 - 1}(x_k-x)\right| < \frac{\varepsilon}{2}. \tag{1}$$

Set $n_\varepsilon := \max\{n_1,n_2\}$. For $n \ge n_\varepsilon$, we have: \begin{align*} \left|\frac{1}{n}\sum_{k=1}^{n}x_k-x\right| &= \frac{1}{n}\left|\sum_{k=1}^{n}(x_k-x)\right| \\ &\le \frac{1}{n}\left|\sum_{k=1}^{n_1 -1}(x_k-x)\right| + \frac{1}{n}\sum_{k=n_1}^{n}|x_k-x| \\ &\le \frac{1}{n_2}\left|\sum_{k=1}^{n_1 -1}(x_k-x)\right| + \frac{1}{n}\sum_{k=n_1}^{n}|x_k-x| \\ &< \frac{\varepsilon}{2} + \frac{n + 1 -n_1}{n} \max_{k=n_1,\ldots, n} |x_k - x| \\ &< \frac{\varepsilon}{2} + \frac{\varepsilon}{2} \\ &= \varepsilon. \end{align*}
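The conclusion of the proof is easy to sanity-check numerically. Here is a minimal Python sketch (the sequence $x_n = 2 + (-1)^n/n$ and the tolerance are my own illustrative choices, not part of the problem) confirming that the running averages approach the limit:

```python
# Numerical sanity check (an illustration, not part of the proof):
# the hypothetical sequence x_n = 2 + (-1)^n / n converges to x = 2,
# so the averages (1/n) * sum_{k=1}^n x_k should also approach 2.

def cesaro_means(seq):
    """Return the list of running averages of seq."""
    total = 0.0
    means = []
    for i, term in enumerate(seq, start=1):
        total += term
        means.append(total / i)
    return means

x = 2.0
xs = [x + (-1) ** n / n for n in range(1, 100001)]
means = cesaro_means(xs)

# For n = 100000 the average is within 0.001 of x.
assert abs(means[-1] - x) < 1e-3
```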

There are two things I don't understand:

1) I don't quite get the thought process that goes into determining (1) before we start the proof.

2) $\frac{1}{n}\sum_{k=n_1}^{n}|x_k-x| \le \frac{n + 1 -n_1}{n} \max_{k=n_1,\ldots, n} |x_k - x|$

Clarification on either of these steps would be greatly appreciated, thank you.


There are 3 answers below.

On BEST ANSWER

Let's discuss the intuitive idea behind $(1)$, as it is the intuitive idea behind the whole proof.

Consider a convergent sequence $x_n \to x$. Convergence of $x_n$ to $x$ means that eventually the sequence $x_n$ becomes, and stays, as close as you like to $x$. The behaviour of the sequence may be quite wild in the first finitely many steps, but after a certain point $N$, the sequence settles and becomes $\varepsilon$-close to $x$ forever after. The smaller the value of $\varepsilon$, the larger $N$ typically must be, but the sequence eventually settles and becomes well-behaved, and stays well-behaved for the rest of its infinitely many points.

So, any "bad" behaviour is relegated to only a finite number of points. This is basically nothing in the face of the infinitude of well-behaved points that come after it. So, if we take the sequence of these averages, while the finitely many badly-behaved points will always contribute something to the average, they will eventually be far outweighed by the well-behaved points, and as such, the average will be pretty well-behaved as well.

Let's talk in slightly more practical terms. Let's fix some $\varepsilon > 0$, and naively consider the guaranteed $N \in \Bbb{N}$ such that $$n \ge N \implies |x_n - x| < \varepsilon.$$ Our badly-behaved points are $x_1, x_2, \ldots, x_{N-1}$. In a large enough average, we'll have \begin{align*} &\dfrac{x_1 + x_2 + \ldots + x_n}{n} - x \\ = \, &\dfrac{(x_1 - x) + (x_2 - x) + \ldots + (x_n - x)}{n} \\ = \, &\dfrac{\overbrace{(x_1 - x) + (x_2 - x) + \ldots + (x_{N-1} - x)}^{\text{Bad points}} + \overbrace{(x_N - x) + (x_{N+1} - x) + \ldots + (x_n - x)}^{\text{Each less than }\varepsilon}}{n}. \end{align*} We just need to make $n$ much bigger than $N$, so that the bad part of the average $$\dfrac{(x_1 - x) + (x_2 - x) + \ldots + (x_{N-1} - x)}{n}$$ becomes insignificantly tiny. This should be possible, since the numerator is really just a fixed sum, and we will be dividing it by larger and larger numbers $n$. So, we should be able to fix it so that $$\dfrac{|(x_1 - x) + (x_2 - x) + \ldots + (x_{N-1} - x)|}{n} < \varepsilon \tag{2}$$ for all $n \ge M$, for some $M$. On the other hand, the good points are all $\varepsilon$-small, so \begin{align*} &\dfrac{|(x_N - x) + (x_{N+1} - x) + \ldots + (x_n - x)|}{n} \\ \le \, &\dfrac{|x_N - x| + |x_{N+1} - x| + \ldots + |x_n - x|}{n} \\ < \, &\dfrac{\overbrace{\varepsilon + \varepsilon + \ldots + \varepsilon}^{\text{Exactly $n - N + 1$ terms}}}{n} = \frac{n - N + 1}{n} \varepsilon < \varepsilon. \tag{3} \end{align*} Thus, in total, $$\left|\dfrac{x_1 + x_2 + \ldots + x_n}{n} - x\right| \le \varepsilon + \varepsilon = 2\varepsilon,$$ whenever $n \ge M$ and $n > N$.
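The bad-part/good-part split above can be illustrated numerically. This is a sketch using a made-up sequence $x_n = 1 + 5/n$ with limit $x = 1$; the particular values of $\varepsilon$, $N$, and $n$ are illustrative assumptions, not from the original proof:

```python
# A numeric sketch of the bad/good split, using the hypothetical
# sequence x_n = 1 + 5/n (limit x = 1).  With eps = 0.01, every term
# from N = 501 onward satisfies |x_n - x| < eps (since 5/n < 0.01
# exactly when n > 500), and the "bad" part of the average shrinks
# like (fixed sum)/n.

eps = 0.01
x = 1.0
N = 501
n = 10 ** 6

bad_sum = sum((x + 5 / k) - x for k in range(1, N))        # a fixed number
good_sum = sum((x + 5 / k) - x for k in range(N, n + 1))   # each term < eps

bad_part = abs(bad_sum) / n
good_part = abs(good_sum) / n

assert bad_part < eps                       # step (2): fixed sum / large n
assert good_part < (n - N + 1) / n * eps    # step (3): term-count bound
```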


This approach is exactly the same as the proof you've quoted, but I've tried to talk through the thinking behind the steps. Where I have used $\varepsilon$, they have used $\varepsilon/2$, so that they obtain $< \varepsilon$ where I obtained $< 2\varepsilon$.

They also use $n_1$ instead of $N$. Hopefully you can see why they want an $n_2$ such that $(1)$ holds: essentially it's the same step as my $(2)$. What I called $M$, they called $n_2$.

As for your second question, this again corresponds to my step $(3)$. Essentially, they are counting the number of terms in the sum, and replacing each term with the largest of all the terms (which yields something bigger than or equal to what you started with). I just replaced everything with $\varepsilon$, since I knew it was strictly larger than everything there. This is essentially what they do too: they then bound the maximum by $\varepsilon/2$ (in their scaling), which again only increases the sum.

Another answer

One way of understanding the thought process behind a proof is to enter your own 'proof puzzle playground' - turn over some puzzle pieces on your own and see how far you get.

So without worrying about precision, we can state that for $n \ge n_1$ the terms $x_n$ are all 'close to' $x$. Well, the fun next step is to break the sigma sum into two pieces and see what happens. When $n \ge n_1$ we can write

$\frac{1}{n}\sum_{k=1}^{n}x_k = \frac{1}{n} \big ( \sum_{k=1}^{n_1-1}x_k + \sum_{k=n_1}^{n} x_k \big ) = \frac{1}{n}\sum_{k=1}^{n_1-1}x_k + \frac{1}{n}\sum_{k=n_1}^{n} x_k \tag{1}$

If we divide the constant $\sum_{k=1}^{n_1-1}x_k$ by $n$ and let $n \to +\infty$, the result is $0$, so that takes care of the first piece.

For the second piece, assume (we can do whatever we want) that for $k \ge n_1$, $x_k = x$. Then

$ \frac{1}{n}\sum_{k=n_1}^{n} x_k = \frac{1}{n}\sum_{k=n_1}^{n} x = \frac{n - n_1 + 1}{n} \times x \tag{2}$

Again, letting $n \to +\infty$ will make you happy: the second piece goes to $x$.

Interestingly, we can replace $x$ by an arbitrarily small $\varepsilon > 0$ and see that

$ \frac{1}{n}\sum_{k=n_1}^{n} \varepsilon = \frac{n - n_1 + 1}{n} \times \varepsilon \tag{3}$

approaches $\varepsilon$ as $n \to +\infty$.
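To see heuristic $(3)$ concretely, here is a tiny Python check (the numbers $\varepsilon = 0.5$ and $n_1 = 10$ are arbitrary illustrative choices, not from the answer) that the coefficient $(n - n_1 + 1)/n$ never exceeds $1$ and drives the bound toward $\varepsilon$:

```python
# Checking heuristic (3) numerically: the coefficient (n - n1 + 1)/n
# tends to 1 as n grows, so the averaged bound tends to eps.
# (eps and n1 here are illustrative choices.)

eps = 0.5
n1 = 10
for n in (10, 100, 10_000, 1_000_000):
    coeff = (n - n1 + 1) / n
    assert coeff <= 1.0

# For n = 1_000_000 the bound coeff * eps is within 1e-5 * eps of eps.
n = 1_000_000
assert abs((n - n1 + 1) / n * eps - eps) < 1e-5 * eps
```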

With this under my belt I feel confident that I can work out the details of a formal proof. But why keep staring at a solution that might be confusing, with perhaps some typos or errors? Just write it out on your own, sticking to the general format in your book.

Another answer

The main idea is that if the sequence converges to $x$, then the "residue" $|x_n-x|$ can be made smaller than any $\epsilon$ once you drop enough of the first terms (there is always an $N$ such that...).

Then, taking the average of the first $N+m$ residues and writing $R_N$ for the sum of the first $N$ of them, you get the upper bound

$$\frac{R_N+m\epsilon}{N+m},$$ which can be made as close as you want to $\epsilon$ by taking $m$ large.
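As a sketch of that final bound (with made-up values for $R_N$, $N$, and $\epsilon$), one can check numerically that $(R_N + m\epsilon)/(N+m)$ decreases toward $\epsilon$ as $m$ grows:

```python
# A quick numeric sketch of the final bound: with a fixed residue sum
# R_N and m further terms each bounded by eps, the average bound
# (R_N + m*eps)/(N + m) approaches eps as m grows.
# (R_N, N, and eps are made-up illustrative values.)

R_N = 42.0   # sum of the first N residues (fixed)
N = 20
eps = 0.1

bounds = [(R_N + m * eps) / (N + m) for m in (1, 100, 10_000, 1_000_000)]

# The bounds decrease toward eps from above.
assert bounds[0] > bounds[-1] > eps
assert abs(bounds[-1] - eps) < 1e-4
```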