Alternative way to solve a limit problem


$$ \lim _{n \rightarrow \infty}\left( \frac{1}{1+n^{2}}+\frac{2}{2+n^{2}}+\cdots+\frac{n}{n+n^{2}} \right) $$

I want to find the limit of this sum, which I found in a book. (Each sum is finite; the limit is taken as $n \to \infty$.) The answer is $1/2$.

The given solution uses the Sandwich/Squeeze Theorem: the sum above lies between $$ \frac{1}{n+n^{2}}+\frac{2}{n+n^{2}}+\frac{3}{n+n^{2}}+\cdots+\frac{n}{n+n^{2}} $$ and $$ \frac{1}{1+n^{2}}+\frac{2}{1+n^{2}}+\cdots+\frac{n}{1+n^{2}}, $$ and both of these bounding sums tend to $1/2$ as $n \to \infty$.
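To see the squeeze concretely, here is a small numerical check (my own sketch, not from the book; the function names `s`, `lower`, and `upper` are mine). It evaluates the original sum alongside the two bounding sums and confirms the ordering:

```python
# Numerical check of the squeeze: lower(n) <= s(n) <= upper(n),
# and both bounds approach 1/2 as n grows.

def s(n):
    """The original sum: 1/(1+n^2) + 2/(2+n^2) + ... + n/(n+n^2)."""
    return sum(k / (k + n**2) for k in range(1, n + 1))

def lower(n):
    """Every denominator replaced by the largest one, n + n^2."""
    return sum(k / (n + n**2) for k in range(1, n + 1))

def upper(n):
    """Every denominator replaced by the smallest one, 1 + n^2."""
    return sum(k / (1 + n**2) for k in range(1, n + 1))

for n in (10, 100, 1000):
    assert lower(n) <= s(n) <= upper(n)
    print(n, lower(n), s(n), upper(n))
```

Note that `lower(n)` comes out as exactly $1/2$ for every $n$, since $\sum_{k=1}^n k = \frac{n(n+1)}{2}$.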

I fully understood the solution, but I find that it isn't something that comes to mind naturally or intuitively. We seemingly need to find, by trial and error, two different bounding sums that both converge to the same number.

Is there a different solution to this limit problem, such as dividing by powers of $n$, or maybe telescoping the sum?


There are 6 answers below.

BEST ANSWER

I want to explain how you could have found the given solution on your own.

The natural thing you want to do with the fractions is add them, but they have different denominators. Now, you could try to start multiplying the terms to get a common denominator $\left (\frac{1(2+n^2)\cdots (n+n^2)}{(1+n^2)(2+n^2)\cdots (n+n^2)}+\cdots\right)$, and you might start off by doing so and then give up when you see that the answer is not going to be easily found this way.

So, let's ask, what is the closest sequence where the denominators are the same (so we can just add the fractions)?

Well, there are two answers: what if we take the denominator to be the first term's, $1+n^2$, and what if we take the denominator to be the last term's, $n+n^2$? The first choice decreases each denominator (except the first), which makes each fraction larger, so it gives an upper bound; the second choice increases each denominator, so it gives a lower bound.
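In fact, both candidate sums can be evaluated in closed form using $\sum_{k=1}^n k=\frac{n(n+1)}{2}$ (a small worked addition of mine): the lower bound equals exactly $\frac12$ for every $n$, and the upper bound tends to $\frac12$: $$ \sum_{k=1}^{n}\frac{k}{n+n^{2}} = \frac{1}{n(1+n)}\cdot\frac{n(n+1)}{2} = \frac{1}{2}, \qquad \sum_{k=1}^{n}\frac{k}{1+n^{2}} = \frac{n(n+1)}{2(1+n^{2})} \to \frac{1}{2}. $$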

There's no trial and error here: there aren't really any other values you could choose. You could try a middle denominator, $k+n^2$ with $1<k<n$, but it isn't a natural choice, and you'll quickly see that the resulting sum can't easily be compared to the original one as larger or smaller.

It's also a standard technique. Think about similar series where the denominator changes, e.g. $\frac{n^2}{1^2+n^3}+\frac{n^2}{2^2+n^3}+\cdots+\frac{n^2}{n^2+n^3}$. You can always replace each denominator by the smallest or the largest denominator appearing in the sum and compare the limits of the two resulting sequences.

The method will not work in every case, as you might get different limits in some cases. However, no method will work in every case, and this sandwiching attempt is always a reasonable option to start with when you see a limit involving sums of terms with slow-growing denominators ($1+n^2,\dots,n+n^2$ are dominated by the $n^2$ term, so changing the values of $1,\dots,n$ is unlikely to change the value in the limit).

ANSWER

Note that \begin{align*}\sum_{r=1}^n \frac r{r+n^2}=\frac 1{n^2}\sum_{r=1}^n\frac r{1+\frac r {n^2}}&=\frac 1{n^2}\sum_{r=1}^n r\left(1-\frac r{n^2}+O\left(\frac{r^2}{n^4}\right)\right)\\&=\frac 1{n^2}\sum_{r=1}^n r-\frac 1{n^4}\sum_{r=1}^n r^2+O\left(\frac 1{n^6}\sum_{r=1}^n r^3\right)\to \frac 12,\end{align*} since the second term is $O(1/n)$ and the third is $O(1/n^2)$.

Here, the identity $\sum r=\sum_{r=1}^nr=\frac {n(n+1)}2$ has been used.

The second equality uses the geometric series expansion $\frac 1{1+x}=1-x+x^2-\cdots$ for $|x|<1$ (the binomial series for $(1+x)^{-1}$).
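As a quick numerical sanity check of this approach (my own addition; `s` and `two_terms` are hypothetical names), keeping just the truncation $\frac 1{n^2}\sum r-\frac 1{n^4}\sum r^2$ already matches $S_n$ up to an error that vanishes as $n$ grows:

```python
# Compare S_n with a two-term truncation of the expansion:
# S_n ~ (1/n^2) * sum(r) - (1/n^4) * sum(r^2), with error O(1/n^2).

def s(n):
    return sum(r / (r + n**2) for r in range(1, n + 1))

def two_terms(n):
    sum_r = n * (n + 1) // 2                 # sum of r from 1 to n
    sum_r2 = n * (n + 1) * (2 * n + 1) // 6  # sum of r^2 from 1 to n
    return sum_r / n**2 - sum_r2 / n**4

for n in (10, 100, 1000):
    print(n, s(n), two_terms(n), abs(s(n) - two_terms(n)))
```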

ANSWER

This approach uses sums of integer powers.

Find $\lim_{n\to\infty} \sum_{k=1}^n \frac{k}{k+n^2}$.

This can be expanded as an absolutely convergent geometric series and rearranged into a sum of Faulhaber polynomials.

$$\frac{k}{k+n^2}=1-\frac{n^2}{k+n^2}=1-\frac{1}{1+\frac{k}{n^2}}=1-\left(1-\frac{k}{n^2}+\frac{k^2}{n^4}-\frac{k^3}{n^6}+\cdots\right)=\frac{k}{n^2}-\frac{k^2}{n^4}+\cdots$$

$\sum_{k=1}^n k^p=O(n^{p+1})$ by Faulhaber's Formulas.

So our sum is $\frac{1}{n^2}\frac{n(n+1)}{2}-\frac{1}{n^4}\frac{n(n+1)(2n+1)}{6}+\cdots$; the first term tends to $1/2$, and the sum of the remaining terms tends to $0$ by the alternating series test (the $p$-th term is $O(n^{1-p})$).

ANSWER

$$S_n=\sum_{r=1}^n \frac r{r+n^2}=\sum_{r=1}^n \frac{\frac r{n^2}}{ 1+\frac{r}{n^2}}=\sum_{r=1}^n \frac{1+\frac r{n^2}-1}{ 1+\frac{r}{n^2}}=\sum_{r=1}^n 1-\sum_{r=1}^n \frac{1}{ 1+\frac{r}{n^2}}=n-\sum_{r=1}^n \frac{1}{ 1+\frac{r}{n^2}}$$

Now $$\sum_{r=1}^n \frac{1}{ 1+\frac{r}{n^2}}=\sum_{r=1}^n \frac{n^2}{n^2+r}=n^2\Bigg[\sum_{r=1}^{n^2+n}\frac 1r -\sum_{r=1}^{n^2}\frac 1r\Bigg],$$ so, in terms of harmonic numbers, $$S_n=n+n^2 \left(H_{n^2}-H_{n^2+n}\right)$$

Using the asymptotic expansion $$H_p=\log (p)+\gamma +\frac{1}{2 p}-\frac{1}{12 p^2}+\frac{1}{120 p^4}+O\left(\frac{1}{p^{6}}\right)$$ twice and continuing with Taylor series to simplify gives $$H_{n^2}-H_{n^2+n}=-\frac{1}{n}+\frac{1}{2 n^2}+\frac{1}{6 n^3}-\frac{1}{4 n^4}+\frac{2}{15 n^5}+O\left(\frac{1}{n^{6}}\right)$$ and therefore $$S_n=\frac{1}{2}+\frac{1}{6 n}-\frac{1}{4 n^2}+\frac{2}{15 n^3}+O\left(\frac{1}{n^{4}}\right),$$ which is quite a good approximation. For example, $$S_{10}=\frac{3039003639041255}{5909102214621606}=0.514292\ldots$$ while the truncated series gives exactly $0.5143$.
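The exact value of $S_{10}$ and the truncated asymptotic series can be compared with exact rational arithmetic (a verification sketch I'm adding, using Python's `fractions` module; `s_exact` is my own name):

```python
from fractions import Fraction

def s_exact(n):
    """S_n computed exactly as a rational number."""
    return sum(Fraction(r, r + n**2) for r in range(1, n + 1))

n = 10
exact = s_exact(n)
# Truncated series: 1/2 + 1/(6n) - 1/(4n^2) + 2/(15n^3)
approx = (Fraction(1, 2) + Fraction(1, 6 * n)
          - Fraction(1, 4 * n**2) + Fraction(2, 15 * n**3))
print(float(exact), float(approx))  # the two agree to about four decimal places
```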

ANSWER

This answer uses the same idea as the answers of TurlocTheRed and Koro, but shows that one doesn't need infinite series or big-$O$ notation. Each denominator is extremely close to $n^2$, so let's compare explicitly: \begin{align*} \sum_{k=1}^n \frac{k}{k+n^2} &= \sum_{k=1}^n \frac{k}{n^2} - \sum_{k=1}^n \bigg( \frac{k}{n^2} - \frac{k}{k+n^2} \bigg) \\ &= \frac1{n^2} \sum_{k=1}^n k - \sum_{k=1}^n \frac{k^2}{n^2(k+n^2)}. \end{align*} By standard formulas, the first sum is exactly $\frac12(1+\frac1n)$, while the second sum is positive and satisfies $$ 0 < \sum_{k=1}^n \frac{k^2}{n^2(k+n^2)} < \sum_{k=1}^n \frac{k^2}{n^2(0+n^2)} = \frac1{n^4} \sum_{k=1}^n k^2 = \frac1{6n}\bigg(1+\frac1n\bigg)\bigg(2+\frac1n\bigg). $$ The desired limit now follows from the squeeze theorem.
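The explicit error bound can also be verified numerically (my own sketch; `error_term` and `error_bound` are hypothetical names):

```python
# Check 0 < sum_{k=1}^n k^2/(n^2 (k+n^2)) < (1/(6n)) (1+1/n) (2+1/n).

def error_term(n):
    return sum(k**2 / (n**2 * (k + n**2)) for k in range(1, n + 1))

def error_bound(n):
    return (1 / (6 * n)) * (1 + 1 / n) * (2 + 1 / n)

for n in (5, 50, 500):
    e = error_term(n)
    assert 0 < e < error_bound(n)  # the bound holds, and both shrink like 1/n
    print(n, e, error_bound(n))
```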

ANSWER

A limit of such a sum can often be interpreted as a Riemann sum. So set $\Delta x=\frac1n$ and $x_k=\frac{k}{n}$; then $$ \frac{k}{k+n^2}=\frac{x_k\,\Delta x}{1+x_k\,\Delta x}. $$ The point $\hat x_k=\frac{x_k}{1+x_k\,\Delta x}$ lies inside the interval $[x_{k-1}, x_k]$, so the given sum $$ \sum_{k=1}^n\hat x_k\,\Delta x $$ is a valid Riemann sum for $$ \int_0^1x\,dx=\frac12. $$ Since $x\mapsto x$ is Riemann integrable, any sequence of Riemann sums converges to the value of the integral as the maximal step size tends to zero.
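One can check numerically that each tag point stays inside its subinterval and that the tagged Riemann sum approaches $1/2$ (a sketch of mine; `riemann_sum` is a hypothetical name):

```python
# Verify that xhat_k = x_k / (1 + x_k * dx) lies in [x_{k-1}, x_k]
# and that the tagged Riemann sum for f(x) = x tends to 1/2.

def riemann_sum(n):
    dx = 1.0 / n
    total = 0.0
    for k in range(1, n + 1):
        xk = k / n
        xhat = xk / (1 + xk * dx)          # the tag point
        assert (k - 1) / n <= xhat <= xk   # tag stays inside its subinterval
        total += xhat * dx                 # f(x) = x, so f(xhat) = xhat
    return total

for n in (10, 100, 1000):
    print(n, riemann_sum(n))
```

By construction, `riemann_sum(n)` equals the original sum $\sum_{k=1}^n \frac{k}{k+n^2}$ up to floating-point rounding.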