Different ways to prove $\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}$ (the Basel problem)


As I have heard, people did not trust Euler when he first discovered the formula (the solution of the Basel problem) $$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}.$$ However, Euler was Euler and he gave other proofs.

I believe many of you know some nice proofs of this; can you please share them with us?

There are 22 answers below.

Best answer:

OK, here's my favorite. I thought of this after reading a proof from the book "Proofs from THE BOOK" by Aigner & Ziegler, but later I found more or less the same proof as mine in a paper published a few years earlier by Josef Hofbauer. On Robin's list, the proof most similar to this is number 9 (EDIT: ...which is actually the proof that I read in Aigner & Ziegler).

When $0 < x < \pi/2$ we have $0<\sin x < x < \tan x$ and thus $$\frac{1}{\tan^2 x} < \frac{1}{x^2} < \frac{1}{\sin^2 x}.$$ Note that $1/\tan^2 x = 1/\sin^2 x - 1$. Split the interval $(0,\pi/2)$ into $2^n$ equal parts, and sum the inequality over the (inner) "gridpoints" $x_k=(\pi/2) \cdot (k/2^n)$: $$\sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k} - \sum_{k=1}^{2^n-1} 1 < \sum_{k=1}^{2^n-1} \frac{1}{x_k^2} < \sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k}.$$ Denoting the sum on the right-hand side by $S_n$, we can write this as $$S_n - (2^n - 1) < \sum_{k=1}^{2^n-1} \left( \frac{2 \cdot 2^n}{\pi} \right)^2 \frac{1}{k^2} < S_n.$$

Although $S_n$ looks like a complicated sum, it can actually be computed fairly easily. To begin with, $$\frac{1}{\sin^2 x} + \frac{1}{\sin^2 (\frac{\pi}{2}-x)} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x \cdot \sin^2 x} = \frac{4}{\sin^2 2x}.$$ Therefore, if we pair up the terms in the sum $S_n$ except the midpoint $\pi/4$ (take the point $x_k$ in the left half of the interval $(0,\pi/2)$ together with the point $\pi/2-x_k$ in the right half) we get 4 times a sum of the same form, but taking twice as big steps so that we only sum over every other gridpoint; that is, over those gridpoints that correspond to splitting the interval into $2^{n-1}$ parts. And the midpoint $\pi/4$ contributes with $1/\sin^2(\pi/4)=2$ to the sum. In short, $$S_n = 4 S_{n-1} + 2.$$ Since $S_1=2$, the solution of this recurrence is $$S_n = \frac{2(4^n-1)}{3}.$$ (For example like this: the particular (constant) solution $(S_p)_n = -2/3$ plus the general solution to the homogeneous equation $(S_h)_n = A \cdot 4^n$, with the constant $A$ determined by the initial condition $S_1=(S_p)_1+(S_h)_1=2$.)

We now have $$ \frac{2(4^n-1)}{3} - (2^n-1) \leq \frac{4^{n+1}}{\pi^2} \sum_{k=1}^{2^n-1} \frac{1}{k^2} \leq \frac{2(4^n-1)}{3}.$$ Multiply by $\pi^2/4^{n+1}$ and let $n\to\infty$. This squeezes the partial sums between two sequences both tending to $\pi^2/6$. Voilà!
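(Added check, not part of the original answer.) The closed form for $S_n$ and the final squeeze are easy to verify numerically; a minimal Python sketch, where `S` is a hypothetical helper name:

```python
import math

# Numerical sanity check (added, not in the original answer): S_n is the sum
# of 1/sin^2 over the inner gridpoints x_k = (pi/2) * (k/2^n), and should
# equal the closed form 2*(4^n - 1)/3 obtained from the recurrence.
def S(n):
    return sum(1 / math.sin(math.pi / 2 * k / 2**n) ** 2 for k in range(1, 2**n))

for n in range(1, 8):
    closed_form = 2 * (4**n - 1) / 3
    assert abs(S(n) - closed_form) < 1e-9 * closed_form

# The squeeze: both bounds on the partial sum tend to pi^2/6.
n = 10
partial = sum(1 / k**2 for k in range(1, 2**n))
lower = math.pi**2 / 4 ** (n + 1) * (2 * (4**n - 1) / 3 - (2**n - 1))
upper = math.pi**2 / 4 ** (n + 1) * (2 * (4**n - 1) / 3)
assert lower < partial < upper
```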

---

I have two favorite proofs. One is the last proof in Robin Chapman's collection; you really should take a look at it.

The other is a proof that generalizes to the evaluation of $\zeta(2n)$ for all $n$, although I'll do it "Euler-style" to shorten the presentation. The basic idea is that meromorphic functions have infinite partial fraction decompositions that generalize the partial fraction decompositions of rational functions.

The particular function we're interested in is $B(x) = \frac{x}{e^x - 1}$, the exponential generating function of the Bernoulli numbers $B_n$. $B$ is meromorphic with poles at $x = 2 \pi i n, n \in \mathbb{Z}$, and at these poles it has residue $2\pi i n$. It follows that we can write, a la Euler,

$$\frac{x}{e^x - 1} = \sum_{n \in \mathbb{Z}} \frac{2\pi i n}{x - 2 \pi i n} = \sum_{n \in \mathbb{Z}} - \left( \frac{1}{1 - \frac{x}{2\pi i n}} \right).$$

Now we can expand each of the terms on the RHS as a geometric series, again a la Euler, to obtain

$$\frac{x}{e^x - 1} = - \sum_{n \in \mathbb{Z}} \sum_{k \ge 0} \left( \frac{x}{2\pi i n} \right)^k = \sum_{n \ge 1} (-1)^{n+1} \frac{2 \zeta(2n)}{(2\pi )^{2n}} x^{2n}$$

because, after rearranging terms, the sum over odd powers cancels out and the sum over even powers doesn't. (This is one indication of why there is no known closed form for $\zeta(2n+1)$.) Equating terms on both sides, it follows that

$$\frac{1}{(2n)!} B_{2n} = (-1)^{n+1} \frac{2 \zeta(2n)}{(2\pi)^{2n}}$$

or

$$\zeta(2n) = (-1)^{n+1} \frac{B_{2n} (2\pi)^{2n}}{2(2n)!}$$

as desired. To compute $\zeta(2)$ it suffices to compute that $B_2 = \frac{1}{6}$, which then gives the usual answer.
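(Added.) The final formula can be sanity-checked numerically against the known values $B_2=\frac16$ and $B_4=-\frac1{30}$, hard-coded below:

```python
import math

# Sketch check (not in the original answer) of
#   zeta(2n) = (-1)^{n+1} B_{2n} (2 pi)^{2n} / (2 (2n)!),
# using the known Bernoulli values B_2 = 1/6 and B_4 = -1/30.
def zeta_from_bernoulli(n, B2n):
    return (-1) ** (n + 1) * B2n * (2 * math.pi) ** (2 * n) / (2 * math.factorial(2 * n))

zeta2 = zeta_from_bernoulli(1, 1 / 6)    # should be pi^2/6
zeta4 = zeta_from_bernoulli(2, -1 / 30)  # should be pi^4/90

# Cross-check zeta(2) against a direct partial sum.
assert abs(zeta2 - sum(1 / k**2 for k in range(1, 10**5))) < 1e-4
```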

---

We can use the function $f(x)=x^{2}$ with $-\pi \leq x\leq \pi $ and find its expansion into a trigonometric Fourier series

$$\dfrac{a_{0}}{2}+\sum_{n=1}^{\infty }(a_{n}\cos nx+b_{n}\sin nx),$$

which is periodic and converges to $f(x)$ in $[-\pi, \pi] $.

Observing that $f(x)$ is even, it is enough to determine the coefficients

$$a_{n}=\dfrac{1}{\pi }\int_{-\pi }^{\pi }f(x)\cos nx\;dx\qquad n=0,1,2,3,...,$$

because

$$b_{n}=\dfrac{1}{\pi }\int_{-\pi }^{\pi }f(x)\sin nx\;dx=0\qquad n=1,2,3,... .$$

For $n=0$ we have

$$a_{0}=\dfrac{1}{\pi }\int_{-\pi }^{\pi }x^{2}dx=\dfrac{2}{\pi }\int_{0}^{\pi }x^{2}dx=\dfrac{2\pi ^{2}}{3}.$$

And for $n=1,2,3,...$ we get

$$a_{n}=\dfrac{1}{\pi }\int_{-\pi }^{\pi }x^{2}\cos nx\;dx$$

$$=\dfrac{2}{\pi }\int_{0}^{\pi }x^{2}\cos nx\;dx=\dfrac{2}{\pi }\times \dfrac{ 2\pi }{n^{2}}(-1)^{n}=(-1)^{n}\dfrac{4}{n^{2}},$$

because

$$\int x^2\cos nx\;dx=\dfrac{2x}{n^{2}}\cos nx+\left( \frac{x^{2}}{ n}-\dfrac{2}{n^{3}}\right) \sin nx.$$

Thus

$$f(x)=\dfrac{\pi ^{2}}{3}+\sum_{n=1}^{\infty }\left( (-1)^{n}\dfrac{4}{n^{2}} \cos nx\right) .$$

Since $f(\pi )=\pi ^{2}$, we obtain

$$\pi ^{2}=\dfrac{\pi ^{2}}{3}+\sum_{n=1}^{\infty }\left( (-1)^{n}\dfrac{4}{ n^{2}}\cos \left( n\pi \right) \right) $$

$$\pi ^{2}=\dfrac{\pi ^{2}}{3}+4\sum_{n=1}^{\infty }\left( (-1)^{n}(-1)^{n} \dfrac{1}{n^{2}}\right) $$

$$\pi ^{2}=\dfrac{\pi ^{2}}{3}+4\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}.$$

Therefore

$$\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}=\dfrac{\pi ^{2}}{4}-\dfrac{\pi ^{2}}{12}= \dfrac{\pi ^{2}}{6}$$
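(Added.) A quick numerical check of the coefficient formula $a_n=(-1)^n\dfrac{4}{n^2}$, using a simple midpoint quadrature rather than the exact antiderivative:

```python
import math

# Numerical sanity check (added, not in the original answer): approximate
#   a_n = (1/pi) * ∫_{-pi}^{pi} x^2 cos(nx) dx
# by a midpoint rule and compare with the claimed closed form (-1)^n * 4/n^2.
def a(n, steps=200_000):
    h = 2 * math.pi / steps
    total = 0.0
    for j in range(steps):
        x = -math.pi + (j + 0.5) * h
        total += x * x * math.cos(n * x) * h
    return total / math.pi

errors = [abs(a(n) - (-1) ** n * 4 / n**2) for n in range(1, 5)]
```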


Second method (available on-line a few years ago) by Eric Rowland. From

$$\log (1-t)=-\sum_{n=1}^{\infty}\dfrac{t^n}{n}$$

and making the substitution $t=e^{ix}$ one gets the series expansion

$$w=\text{Log}(1-e^{ix})=-\sum_{n=1}^{\infty }\dfrac{e^{inx}}{n}=-\sum_{n=1}^{ \infty }\dfrac{1}{n}\cos nx-i\sum_{n=1}^{\infty }\dfrac{1}{n}\sin nx,$$

whose radius of convergence is $1$. Now if we take the imaginary part of both sides, the RHS becomes

$$\Im w=-\sum_{n=1}^{\infty }\dfrac{1}{n}\sin nx,$$

and the LHS

$$\Im w=\arg \left( 1-\cos x-i\sin x\right) =\arctan \dfrac{-\sin x}{ 1-\cos x}.$$

Since

$$\arctan \dfrac{-\sin x}{1-\cos x}=-\arctan \dfrac{2\sin \dfrac{x}{2}\cdot \cos \dfrac{x}{2}}{2\sin ^{2}\dfrac{x}{2}}$$

$$=-\arctan \cot \dfrac{x}{2}=-\arctan \tan \left( \dfrac{\pi }{2}-\dfrac{x}{2} \right) =\dfrac{x}{2}-\dfrac{\pi }{2},$$

the following expansion holds

$$\dfrac{\pi }{2}-\frac{x}{2}=\sum_{n=1}^{\infty }\dfrac{1}{n}\sin nx.\qquad (\ast )$$
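(Added.) The expansion $(\ast)$ converges slowly, but partial sums confirm it numerically at a few interior points:

```python
import math

# Numerical check (added, not in the original answer) of (*):
#   (pi - x)/2 = sum_{n>=1} sin(nx)/n  for x in (0, 2*pi).
def lhs(x):
    return (math.pi - x) / 2

def rhs(x, N=200_000):
    return sum(math.sin(n * x) / n for n in range(1, N + 1))

errs = [abs(lhs(x) - rhs(x)) for x in (0.5, 1.0, 2.5)]
```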

Integrating the identity $(\ast )$, we obtain

$$\dfrac{\pi }{2}x-\dfrac{x^{2}}{4}+C=-\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}\cos nx.\qquad (\ast \ast )$$

Setting $x=0$, we get the relation between $C$ and $\zeta (2)$

$$C=-\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}=-\zeta (2).$$

And for $x=\pi $, since

$$\zeta (2)=2\sum_{n=1}^{\infty }\dfrac{(-1)^{n-1}}{n^{2}},$$

we deduce

$$\dfrac{\pi ^{2}}{4}+C=-\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}\cos n\pi =\sum_{n=1}^{\infty }\dfrac{(-1)^{n-1}}{n^{2}}=\dfrac{1}{2}\zeta (2)=-\dfrac{1}{ 2}C.$$

Solving for $C$

$$C=-\dfrac{\pi ^{2}}{6},$$

we thus prove

$$\zeta (2)=\dfrac{\pi ^{2}}{6}.$$

Note: this second method can generate all the even zeta values $\zeta (2n)$ by repeatedly integrating $(\ast\ast )$, which is why I appreciate it. Unfortunately it does not work for $\zeta (2n+1)$.

Note also that $$C=-\dfrac{\pi ^{2}}{6}$$ can be obtained by integrating $(\ast\ast )$ and substituting $$x=0,\quad x=\pi$$ respectively.

---

Here is one more nice proof, I learned it from Grisha Mikhalkin:

Lemma: Let $Z$ be a complex curve in $\mathbb{C}^2$. Let $R(Z) \subset \mathbb{R}^2$ be the projection of $Z$ onto its real parts and $I(Z)$ the projection onto its imaginary parts. If these projections are both one to one, then the area of $R(Z)$ is equal to the area of $I(Z)$.

Proof: There is an obvious map from $R(Z)$ to $I(Z)$, given by lifting $(x_1, x_2) \in R(Z)$ to $(x_1+i y_1, x_2 + i y_2) \in Z$, and then projecting to $(y_1, y_2) \in I(Z)$. We must prove this map has Jacobian $1$. WLOG, translate $(x_1, y_1, x_2, y_2)$ to $(0,0,0,0)$ and let $Z$ obey $\partial z_2/\partial z_1 = a+bi$ near $(0,0)$. To first order, we have $x_2 = a x_1 - b y_1$ and $y_2 = a y_1 + b x_1$. So $y_1 = (a/b) x_1 - (1/b) x_2$ and $y_2 = (a^2 + b^2)/b x_1 - (a/b) x_2$. So the derivative of $(x_1, x_2) \mapsto (y_1, y_2)$ is $\left( \begin{smallmatrix} a/b & - 1/b \\ (a^2 + b^2)/b & -a/b \end{smallmatrix} \right)$ and the Jacobian is $1$. QED

Now, consider the curve $e^{-z_1} + e^{-z_2} = 1$, where $z_1$ and $z_2$ obey the following inequalities: $x_1 \geq 0$, $x_2 \geq 0$, $-\pi \leq y_1 \leq 0$ and $0 \leq y_2 \leq \pi$.

Given a point on $e^{-z_1} + e^{-z_2} = 1$, consider the triangle with vertices at $0$, $e^{-z_1}$ and $e^{-z_1} + e^{-z_2} = 1$. The inequalities on the $y$'s state that the triangle should lie above the real axis; the inequalities on the $x$'s state that the horizontal base should be the longest side.

Projecting onto the $x$ coordinates, we see that the triangle exists if and only if the triangle inequality $e^{-x_1} + e^{-x_2} \geq 1$ is obeyed. So $R(Z)$ is the region under the curve $x_2 = - \log(1-e^{-x_1})$. The area under this curve is $$\int_{0}^{\infty} - \log(1-e^{-x}) dx = \int_{0}^{\infty} \sum \frac{e^{-kx}}{k} dx = \sum \frac{1}{k^2}.$$
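(Added.) The area computation can be double-checked by quadrature; the substitution $u=e^{-x}$ below is my own convenience, not part of the answer, and puts the integral on a finite interval:

```python
import math

# Numerical check (added): with u = e^{-x}, the area
#   ∫_0^∞ -log(1 - e^{-x}) dx  becomes  ∫_0^1 -log(1-u)/u du,
# approximated here by a midpoint rule; it should be pi^2/6.
steps = 100_000
h = 1.0 / steps
area = 0.0
for j in range(steps):
    u = (j + 0.5) * h
    area += -math.log(1 - u) / u * h
```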

Now, project onto the $y$ coordinates. Set $(y_1, y_2) = (-\theta_1, \theta_2)$ for convenience, so the angles of the triangle are $(\theta_1, \theta_2, \pi - \theta_1 - \theta_2)$. The largest angle of a triangle is opposite the largest side, so we want $\theta_1$, $\theta_2 \leq \pi - \theta_1 - \theta_2$, plus the obvious inequalities $\theta_1$, $\theta_2 \geq 0$. So $I(Z)$ is the quadrilateral with vertices at $(0,0)$, $(0, \pi/2)$, $(\pi/3, \pi/3)$ and $(\pi/2, 0)$ and, by elementary geometry, this has area $\pi^2/6$.

---

Here is another one, which is more or less what Euler did in one of his proofs.

The function $\sin x$, $x\in\mathbb{R}$, is zero exactly at $x=n\pi$ for each integer $n$. If we factorize it as an infinite product, we get

$$\sin x = \cdots\left(1+\frac{x}{3\pi}\right)\left(1+\frac{x}{2\pi}\right)\left(1+\frac{x}{\pi}\right)x\left(1-\frac{x}{\pi}\right)\left(1-\frac{x}{2\pi}\right)\left(1-\frac{x}{3\pi}\right)\cdots =$$ $$= x\left(1-\frac{x^2}{\pi^2}\right)\left(1-\frac{x^2}{2^2\pi^2}\right)\left(1-\frac{x^2}{3^2\pi^2}\right)\cdots\quad.$$

We can also represent $\sin x$ as a Taylor series at $x=0$:

$$\sin x = x - \frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\cdots\quad.$$

Multiplying the product and identifying the coefficient of $x^3$ we see that

$$\frac{x^3}{3!}=x\left(\frac{x^2}{\pi^2} + \frac{x^2}{2^2\pi^2}+ \frac{x^2}{3^2\pi^2}+\cdots\right)=x^3\sum_{n=1}^{\infty}\frac{1}{n^2\pi^2}$$ or $$\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}.$$
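(Added.) A truncated version of the product does converge to $\sin x$, which can be checked numerically:

```python
import math

# Numerical check (added, not part of Euler's argument) of the product
#   sin(x) = x * prod_{n>=1} (1 - x^2 / (n^2 pi^2)),
# truncated at a large N.
x = 1.3
p = x
for n in range(1, 100_000):
    p *= 1 - x * x / (n * n * math.pi**2)
```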


---

This is not really an answer, but rather a long comment prompted by David Speyer's answer. The proof that David gives seems to be the one in How to compute $\sum 1/n^2$ by solving triangles by Mikael Passare, although that paper uses a slightly different way of seeing that the area of the region $U_0$ (in Passare's notation) bounded by the positive axes and the curve $e^{-x}+e^{-y}=1$, $$\int_0^{\infty} -\ln(1-e^{-x}) dx,$$ is equal to $\sum_{n\ge 1} \frac{1}{n^2}$.

This brings me to what I really wanted to mention, namely another curious way to see why $U_0$ has that area; I learned this from Johan Wästlund. Consider the region $D_N$ illustrated below for $N=8$:

(Figure: the region $D_8$, a staircase-like shape whose area equals the sum of reciprocal squares.)

Although it's not immediately obvious, the area of $D_N$ is $\sum_{n=1}^N \frac{1}{n^2}$. Proof: The area of $D_1$ is 1. To get from $D_N$ to $D_{N+1}$ one removes the boxes along the top diagonal, and adds a new leftmost column of rectangles of width $1/(N+1)$ and heights $1/1,1/2,\ldots,1/N$, plus a new bottom row which is the "transpose" of the new column, plus a square of side $1/(N+1)$ in the bottom left corner. The $k$th rectangle from the top in the new column and the $k$th rectangle from the left in the new row (not counting the square) have a combined area which exactly matches the $k$th box in the removed diagonal: $$ \frac{1}{k} \frac{1}{N+1} + \frac{1}{N+1} \frac{1}{N+1-k} = \frac{1}{k} \frac{1}{N+1-k}. $$ Thus the area added in the process is just that of the square, $1/(N+1)^2$. Q.E.D.
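(Added.) The matching identity in the induction step can be verified in exact rational arithmetic:

```python
from fractions import Fraction

# Exact check (added, not in the original answer) of the matching identity
# used when passing from D_N to D_{N+1}:
#   1/k * 1/(N+1) + 1/(N+1) * 1/(N+1-k) = 1/k * 1/(N+1-k).
checks = []
for N in range(1, 30):
    for k in range(1, N + 1):
        lhs = Fraction(1, k) * Fraction(1, N + 1) + Fraction(1, N + 1) * Fraction(1, N + 1 - k)
        rhs = Fraction(1, k) * Fraction(1, N + 1 - k)
        checks.append(lhs == rhs)
```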

(Apparently this shape somehow comes up in connection with the "random assignment problem", where there's an expected value of something which turns out to be $\sum_{n=1}^N \frac{1}{n^2}$.)

Now place $D_N$ in the first quadrant, with the lower left corner at the origin. Letting $N\to\infty$ gives nothing but the region $U_0$: for large $N$ and for $0<\alpha<1$, the upper corner of column number $\lceil \alpha N \rceil$ in $D_N$ lies at $$ (x,y) = \left( \sum_{n=\lceil (1-\alpha) N \rceil}^N \frac{1}{n}, \sum_{n=\lceil \alpha N \rceil}^N \frac{1}{n} \right) \sim \left(\ln\frac{1}{1-\alpha}, \ln\frac{1}{\alpha}\right),$$ hence (in the limit) on the curve $e^{-x}+e^{-y}=1$.

---

Another variation. We make use of the following identity (proved at the bottom of this note):

$$\sum_{k=1}^n \cot^2 \left( \frac {2k-1}{2n} \frac{\pi}{2} \right) = 2n^2 - n. \quad (1)$$

Now $1/\theta > \cot \theta > 1/\theta - \theta/3 > 0$ for $0< \theta< \pi/2$ (note that $\pi/2 < \sqrt{3}$, so the lower bound is positive), and so $$ 1/\theta^2 - 2/3 < \cot^2 \theta < 1/\theta^2. \quad (2)$$

With $\theta_k = (2k-1)\pi/4n,$ summing the inequalities $(2)$ from $k=1$ to $n$ we obtain

$$2n^2 - n < \sum_{k=1}^n \left( \frac{2n}{2k-1}\frac{2}{\pi} \right)^2 < 2n^2 - n + 2n/3.$$

Hence

$$\frac{\pi^2}{16}\frac{2n^2-n}{n^2} < \sum_{k=1}^n \frac{1}{(2k-1)^2} < \frac{\pi^2}{16}\frac{2n^2-n/3}{n^2}.$$

Taking the limit as $n \rightarrow \infty$ we obtain

$$ \sum_{k=1}^\infty \frac{1}{(2k-1)^2} = \frac{\pi^2}{8},$$

from which the result for $\sum_{k=1}^\infty 1/k^2$ follows easily.

To prove $(1)$ we note that

$$ \cos 2n\theta = \text{Re}(\cos\theta + i \sin\theta)^{2n} = \sum_{k=0}^n (-1)^k {2n \choose 2k}\cos^{2n-2k}\theta\sin^{2k}\theta.$$

Therefore

$$\frac{\cos 2n\theta}{\sin^{2n}\theta} = \sum_{k=0}^n (-1)^k {2n \choose 2k}\cot^{2n-2k}\theta.$$

And so setting $x = \cot^2\theta$ we note that

$$f(x) = \sum_{k=0}^n (-1)^k {2n \choose 2k}x^{n-k}$$

has roots $x_j = \cot^2 (2j-1)\pi/4n,$ for $j=1,2,\ldots,n,$ from which $(1)$ follows since ${2n \choose 2n-2} = 2n^2-n.$
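(Added.) Identity $(1)$ is easy to confirm numerically for many $n$:

```python
import math

# Numerical check (added, not in the original answer) of identity (1):
#   sum_{k=1}^n cot^2((2k-1) pi / (4n)) = 2 n^2 - n.
def cot2_sum(n):
    return sum(1 / math.tan((2 * k - 1) * math.pi / (4 * n)) ** 2 for k in range(1, n + 1))

errs = [abs(cot2_sum(n) - (2 * n * n - n)) / (2 * n * n - n) for n in range(1, 40)]
```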

---

Here is a complex-analytic proof.

For $z\in D=\mathbb{C}\setminus\{0,1\}$, let

$$R(z)=\sum\frac{1}{\log^2 z}$$

where the sum is taken over all branches of the logarithm. Each point in $D$ has a neighbourhood on which the branches of $\log(z)$ are analytic. Since the series converges uniformly away from $z=1$, $R(z)$ is analytic on $D$.

Now a few observations:

(i) Each term of the series tends to $0$ as $z\to0$. Thanks to the uniform convergence this implies that the singularity at $z=0$ is removable and we can set $R(0)=0$.

(ii) The only singularity of $R$ is a double pole at $z=1$ due to the contribution of the principal branch of $\log z$. Moreover, $\lim_{z\to1}(z-1)^2R(z)=1$.

(iii) $R(1/z)=R(z)$.

By (i) and (iii) $R$ is meromorphic on the extended complex plane, therefore it is rational. By (ii) the denominator of $R(z)$ is $(z-1)^2$. Since $R(0)=R(\infty)=0$, the numerator has the form $az$. Then (ii) implies $a=1$, so that $$R(z)=\frac{z}{(z-1)^2}.$$

Now, setting $z=e^{2\pi i w}$ yields $$\sum\limits_{n=-\infty}^{\infty}\frac{1}{(w-n)^2}=\frac{\pi^2}{\sin^2(\pi w)}$$ which implies that $$\sum\limits_{k=0}^{\infty}\frac{1}{(2k+1)^2}=\frac{\pi^2}{8},$$ and the identity $\zeta(2)=\pi^2/6$ follows.

The proof is due to T. Marshall (American Mathematical Monthly, Vol. 117(4), 2010, P. 352).
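(Added, not part of Marshall's proof.) The partial-fraction identity obtained at the end can be checked numerically with a symmetric truncation:

```python
import math

# Numerical check (added) of
#   sum_{n in Z} 1/(w-n)^2 = pi^2 / sin^2(pi w)
# at a non-integer w, truncating the sum symmetrically at |n| <= N.
w = 0.3
N = 100_000
s = sum(1 / (w - n) ** 2 for n in range(-N, N + 1))
target = math.pi**2 / math.sin(math.pi * w) ** 2
```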

---

Consider the following series expansion, valid for $ x > 0 $:

$$\frac{\sin x}{x} = 1 - \frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+\cdots\quad.$$

Now substitute $ x = \sqrt{y}\ $ to arrive at

$$\frac{\sin \sqrt{y}\ }{\sqrt{y}\ } = 1 - \frac{y}{3!}+\frac{y^2}{5!}-\frac{y^3}{7!}+\cdots\quad.$$

If we find the roots of $\frac{\sin \sqrt{y}\ }{\sqrt{y}\ } = 0 $, we find that $ y = n^2\pi^2\ $ for each nonzero integer $ n $.

With all of this in mind, recall that for a polynomial

$ P(x) = a_{n}x^n + a_{n-1}x^{n-1} +\cdots+a_{1}x + a_{0} $ with roots $ r_{1}, r_{2}, \cdots , r_{n} $

$$\frac{1}{r_{1}} + \frac{1}{r_{2}} + \cdots + \frac{1}{r_{n}} = -\frac{a_{1}}{a_{0}}$$

Treating the above series for $ \frac{\sin \sqrt{y}\ }{\sqrt{y}\ } $ as a polynomial, we see that

$$\frac{1}{1^2\pi^2} + \frac{1}{2^2\pi^2} + \frac{1}{3^2\pi^2} + \cdots = -\frac{-\frac{1}{3!}}{1}$$

then multiplying both sides by $ \pi^2 $ gives the desired series.

$$\frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = \frac{\pi^2}{6}$$

---

The most recent issue of The American Mathematical Monthly (August-September 2011, pp. 641-643) has a new proof by Luigi Pace based on elementary probability. Here's the argument.

Let $X_1$ and $X_2$ be independent, identically distributed standard half-Cauchy random variables. Thus their common pdf is $p(x) = \frac{2}{\pi (1+x^2)}$ for $x > 0$.

Let $Y = X_1/X_2$. Then the pdf of $Y$ is, for $y > 0$, $$p_Y(y) = \int_0^{\infty} x p_{X_1} (xy) p_{X_2}(x) dx = \frac{4}{\pi^2} \int_0^\infty \frac{x}{(1+x^2 y^2)(1+x^2)}dx$$ $$=\frac{2}{\pi^2 (y^2-1)} \left[\log \left( \frac{1+x^2 y^2}{1+x^2}\right) \right]_{x=0}^{\infty} = \frac{2}{\pi^2} \frac{\log(y^2)}{y^2-1} = \frac{4}{\pi^2} \frac{\log(y)}{y^2-1}.$$

Since $X_1$ and $X_2$ are equally likely to be the larger of the two, we have $P(Y < 1) = 1/2$. Thus $$\frac{1}{2} = \int_0^1 \frac{4}{\pi^2} \frac{\log(y)}{y^2-1} dy.$$ This is equivalent to $$\frac{\pi^2}{8} = \int_0^1 \frac{-\log(y)}{1-y^2} dy = -\int_0^1 \log(y) (1+y^2+y^4 + \cdots) dy = \sum_{k=0}^\infty \frac{1}{(2k+1)^2},$$ which, as others have pointed out, implies $\zeta(2) = \pi^2/6$.
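(Added.) The concluding sum can be checked numerically; the termwise integrals $\int_0^1 -\log(y)\,y^{2k}\,dy = \frac{1}{(2k+1)^2}$ reduce the integral to the odd reciprocal squares:

```python
import math

# Numerical check (added, not in Pace's proof) that
#   ∫_0^1 -log(y)/(1-y^2) dy = pi^2/8,
# using the termwise values ∫_0^1 -log(y) y^{2k} dy = 1/(2k+1)^2.
odd_sum = sum(1.0 / (2 * k + 1) ** 2 for k in range(1_000_000))
```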

---

At risk of contravening group etiquette w.r.t. old questions, I'm going to take this opportunity to post my own version. I don't see it in a transparent form in any of the other posts or in Robin Chapman's article, so I invite anyone to point out the correspondence if it's there. I like this argument because it's physical and can be followed without mathematical formalism.

We start by assuming the well-known series for $\pi/4$ in alternating odd fractions. We can recognize it as the sum of the Fourier series of the square wave, evaluated at the origin:

$$\cos(x) - \frac{\cos(3x)}{3} + \frac{\cos(5x)}{5} - \cdots$$

It is easily argued on physical grounds that this adds up to a square wave; and that the height of the wave is $\pi/4$ follows from the alternating sequence already mentioned. Now we are going to interpret this wave as an electric current flowing through a resistor. There are two ways of calculating the power, and they must agree. First, we can just take the square of the amplitude; in the case of this square wave, this is obviously a constant and it is just $\,\,\pi^2/16$. The other way is to add up the power of the sinusoidal components. These are the squares of the individual amplitudes:

$$1 + \frac{1}{9} + \frac{1}{25} + \cdots \overset{?}{=} \frac{\pi^2}{16}$$

No, not quite; I've been a little sloppy and neglected to mention that when calculating the power of a sine wave, you use its RMS amplitude and not its peak amplitude. This introduces a factor of two; so in fact the series as written adds up to $\,\pi^2/8.$ This isn't quite what we want; remember we've just added up the odd fractions. But the even fractions contribute in a rather picturesque way; it's easy to group them by powers of two into a geometric sum leading to the desired result of $\,\,\pi^2/6.$
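(Added.) Both power computations can be imitated numerically: the Fourier partial sums really do flatten out at $\pi/4$, and the odd reciprocal squares sum to $\pi^2/8$:

```python
import math

# Numerical version (added, not in the original answer) of the square-wave
# argument: cos(x) - cos(3x)/3 + cos(5x)/5 - ... ≈ pi/4 on (-pi/2, pi/2),
# and the odd reciprocal squares sum to pi^2/8.
def square_wave(x, N=100_000):
    return sum((-1) ** k * math.cos((2 * k + 1) * x) / (2 * k + 1) for k in range(N))

wave_errs = [abs(square_wave(x) - math.pi / 4) for x in (0.0, 0.5, 1.0)]
odd_power = sum(1 / (2 * k + 1) ** 2 for k in range(100_000))
```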

---

In response to a request here: Compute $\oint z^{-2k} \cot (\pi z) dz$ where the integral is taken around a square of side $2N+1$. Routine estimates show that the integral goes to $0$ as $N \to \infty$.

Now, let's compute the integral by residues. At $z=0$, the residue is $\pi^{2k-1} q$, where $q$ is some rational number coming from the power series for $\cot$. For example, if $k=1$, then we get $- \pi/3$.

At $z=m$, for each integer $m \neq 0$, the residue is $m^{-2k} \pi^{-1}$. So $$\pi^{-1} \lim_{N \to \infty} \sum_{\substack{-N \leq m \leq N \\ m \neq 0}} m^{-2k} + \pi^{2k-1} q=0$$ or $$\sum_{m=1}^{\infty} m^{-2k} = -\pi^{2k} q/2$$ as desired. In particular, for $k=1$ we have $q=-1/3$, so $\sum m^{-2} = -\pi^2(-1/3)/2 = \pi^2/6$.
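(Added.) For $k=1,2$ the residue bookkeeping gives $q=-1/3$ and $q=-1/45$, read off from $\cot w = 1/w - w/3 - w^3/45 - \cdots$, and both resulting values can be compared against direct partial sums:

```python
import math

# Check (added, not in the original answer) of sum m^{-2k} = -pi^{2k} q / 2.
# From cot(pi z) = 1/(pi z) - (pi z)/3 - (pi z)^3/45 - ..., the residue of
# z^{-2k} cot(pi z) at 0 is pi^{2k-1} q with q = -1/3 (k=1), q = -1/45 (k=2).
zeta2 = -math.pi**2 * (-1 / 3) / 2    # should be pi^2/6
zeta4 = -math.pi**4 * (-1 / 45) / 2   # should be pi^4/90
s2 = sum(m**-2 for m in range(1, 100_000))
s4 = sum(m**-4 for m in range(1, 10_000))
```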

Common variants: We can replace $\cot$ with $\tan$, with $1/(e^{2 \pi i z}-1)$, or with similar formulas.

This is reminiscent of Qiaochu's proof but, rather than actually establishing the relation $\pi^{-1} \cot(\pi z) = \sum (z-n)^{-1}$, one simply establishes that both sides contribute the same residues to a certain integral.

---

I'll post the one I know since it is Euler's, and is quite easy and stays in $\mathbb{R}$. (I'm guessing Euler didn't have tools like residues back then).

Let

$$s = {\sin ^{ - 1}}x$$

Then

$$\int\limits_0^{\frac{\pi }{2}} {sds} = \frac{{{\pi ^2}}}{8}$$

But then

$$\int\limits_0^1 {\frac{{{{\sin }^{ - 1}}x}}{{\sqrt {1 - {x^2}} }}dx} = \frac{{{\pi ^2}}}{8}$$

Since

$${\sin ^{ - 1}}x = \int {\frac{{dx}}{{\sqrt {1 - {x^2}} }}} = x + \frac{1}{2}\frac{{{x^3}}}{3} + \frac{{1 \cdot 3}}{{2 \cdot 4}}\frac{{{x^5}}}{5} + \frac{{1 \cdot 3 \cdot 5}}{{2 \cdot 4 \cdot 6}}\frac{{{x^7}}}{7} + \cdots $$

We have

$$\int_0^1 \frac{\sin^{-1}x}{\sqrt{1-x^2}}\,dx = \int_0^1 \left( x + \frac{1}{2}\frac{x^3}{3} + \frac{1\cdot 3}{2\cdot 4}\frac{x^5}{5} + \frac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6}\frac{x^7}{7} + \cdots \right)\frac{dx}{\sqrt{1-x^2}}$$

But

$$\int\limits_0^1 {\frac{{{x^{2n + 1}}}}{{\sqrt {1 - {x^2}} }}dx} = \frac{{2n}}{{2n + 1}}\int\limits_0^1 {\frac{{{x^{2n - 1}}}}{{\sqrt {1 - {x^2}} }}dx} $$

which yields

$$\int\limits_0^1 {\frac{{{x^{2n + 1}}}}{{\sqrt {1 - {x^2}} }}dx} = \frac{{\left( {2n} \right)!!}}{{\left( {2n + 1} \right)!!}}$$

since all powers are odd.

This ultimately produces:

$$\frac{{{\pi ^2}}}{8} = 1 + \frac{1}{2}\frac{1}{3}\left( {\frac{2}{3}} \right) + \frac{{1 \cdot 3}}{{2 \cdot 4}}\frac{1}{5}\left( {\frac{{2 \cdot 4}}{{3 \cdot 5}}} \right) + \frac{{1 \cdot 3 \cdot 5}}{{2 \cdot 4 \cdot 6}}\frac{1}{7}\left( {\frac{{2 \cdot 4 \cdot 6}}{{3 \cdot 5 \cdot 7}}} \right) \cdots $$

$$\frac{{{\pi ^2}}}{8} = 1 + \frac{1}{{{3^2}}} + \frac{1}{{{5^2}}} + \frac{1}{{{7^2}}} + \cdots $$

Let

$$1 + \frac{1}{{{2^2}}} + \frac{1}{{{3^2}}} + \frac{1}{{{4^2}}} + \cdots = \omega $$

Then

$$\frac{1}{{{2^2}}} + \frac{1}{{{4^2}}} + \frac{1}{{{6^2}}} + \frac{1}{{{8^2}}} + \cdots = \frac{\omega }{4}$$

Which means

$$\frac{\omega }{4} + \frac{{{\pi ^2}}}{8} = \omega $$

or

$$\omega = \frac{{{\pi ^2}}}{6}$$
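(Added.) The final even/odd bookkeeping is easy to check numerically:

```python
import math

# Numerical check (added, not in the original answer) of the final step:
# the even terms sum to omega/4, so omega/4 + pi^2/8 = omega forces
# omega = pi^2/6.
odd = sum(1 / n**2 for n in range(1, 400_000, 2))
even = sum(1 / n**2 for n in range(2, 400_000, 2))
omega = odd + even
```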

---

I like this one:

Let $f\in \operatorname{Lip}(S^{1})$, where $\operatorname{Lip}(S^{1})$ is the space of Lipschitz functions on $S^{1}$. Then for each $k\in \mathbb{Z}$ the Fourier coefficient of $f$ $$\hat{f}(k)=\frac{1}{2\pi}\int f(\theta)e^{-ik\theta}\,d\theta$$ is well defined.

By the inversion formula, we have $$f(\theta)=\sum_{k\in\mathbb{Z}}\hat{f}(k)e^{ik\theta}.$$

Now take $f(\theta)=|\theta|$, $\theta\in [-\pi,\pi]$, and note that $f\in \operatorname{Lip}(S^{1})$.

We have $$ \hat{f}(k) = \left\{ \begin{array}{rl} \frac{\pi}{2} &\mbox{ if $k=0$} \\ 0 &\mbox{ if $|k|\neq 0$ and $|k|$ is even} \\ -\frac{2}{k^{2}\pi} &\mbox{if $|k|\neq 0$ and $|k|$ is odd} \end{array} \right. $$

Using the inversion formula, we have on $\theta=0$ that $$0=\sum_{k\in\mathbb{Z}}\hat{f}(k).$$

Then,

\begin{eqnarray} 0 &=& \frac{\pi}{2}-\sum_{k\in\mathbb{Z}\ |k|\ odd}\frac{2}{k^{2}\pi} \nonumber \\ &=& \frac{\pi}{2}-\sum_{k\in\mathbb{N}\ |k|\ odd}\frac{4}{k^{2}\pi} \nonumber \\ \end{eqnarray}

This implies $$\sum_{k\in\mathbb{N}\ |k|\ odd}\frac{1}{k^{2}} =\frac{\pi^{2}}{8}$$

If we multiply the last equation by $\frac{1}{2^{2n}}$ with $n=0,1,2,...$ ,we get $$\sum_{k\in\mathbb{N}\ |k|\ odd}\frac{1}{(2^{n}k)^{2}} =\frac{\pi^{2}}{2^{2n}8}$$

Now $$\sum_{n=0,1,...}(\sum_{k\in\mathbb{N}\ |k|\ odd}\frac{1}{(2^{n}k)^{2}}) =\sum_{n=0,1,...}\frac{\pi^{2}}{2^{2n}8}$$

The sum in the left is equal to: $\sum_{k\in\mathbb{N}}\frac{1}{k^{2}}$

The sum in the right is equal to :$\frac{\pi^{2}}{6}$

So we conclude: $$\sum_{k\in\mathbb{N}}\frac{1}{k^{2}}=\frac{\pi^{2}}{6}$$

Note: This is problem 9, page 208, from the book of Michael Eugene Taylor, Partial Differential Equations, Vol. 1.

---

See evaluations of the Riemann zeta function $\zeta(2)=\sum_{n=1}^\infty\frac{1}{n^2}$ at mathworld.wolfram.com and a solution by D. P. Giesy in Mathematics Magazine:

D. P. Giesy, Still another elementary proof that $\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}$, Math. Mag. 45 (1972) 148–149.

Unfortunately I did not get a link to this article, but there is a link to a note from Robin Chapman that seems to me a variation of Giesy's proof.

---

Note that $$ \frac{\pi^2}{\sin^2\pi z}=\sum_{n=-\infty}^{\infty}\frac{1}{(z-n)^2} $$ from complex analysis and that both sides are analytic everywhere except $n=0,\pm 1,\pm 2,\cdots$. Then one can obtain $$ \frac{\pi^2}{\sin^2\pi z}-\frac{1}{z^2}=\sum_{n=1}^{\infty}\frac{1}{(z-n)^2}+\sum_{n=1}^{\infty}\frac{1}{(z+n)^2}. $$ Now the right hand side is analytic at $z=0$ and hence $$\lim_{z\to 0}\left(\frac{\pi^2}{\sin^2\pi z}-\frac{1}{z^2}\right)=2\sum_{n=1}^{\infty}\frac{1}{n^2}.$$ Note $$\lim_{z\to 0}\left(\frac{\pi^2}{\sin^2\pi z}-\frac{1}{z^2}\right)=\frac{\pi^2}{3}.$$ Thus $$\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}.$$
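(Added.) The limit used above can be confirmed numerically:

```python
import math

# Numerical check (added, not in the original answer) of
#   lim_{z->0} (pi^2/sin^2(pi z) - 1/z^2) = pi^2/3.
vals = [math.pi**2 / math.sin(math.pi * z) ** 2 - 1 / z**2 for z in (1e-2, 1e-3, 1e-4)]
```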

---

A short way to get the sum is to use the Fourier expansion of $x^2$ on $(-\pi,\pi)$. Recall that the Fourier expansion of $f(x)$ is $$ \tilde{f}(x)=\frac{1}{2}a_0+\sum_{n=1}^\infty(a_n\cos nx+b_n\sin nx), \quad x\in(-\pi,\pi),$$ where $$ a_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos nx\; dx,\quad n=0,1,2,\cdots, \qquad b_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin nx\; dx,\quad n=1,2,3,\cdots, $$ and $$ \tilde{f}(x)=\frac{f(x-0)+f(x+0)}{2}. $$ Easy calculation shows $$ x^2=\frac{\pi^2}{3}+4\sum_{n=1}^\infty(-1)^n\frac{\cos nx}{n^2}, \quad x\in[-\pi,\pi]. $$ Letting $x=\pi$ in both sides gives $$ \sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}.$$

Another way to get the sum is to use Parseval's identity for the Fourier expansion of $x$ on $(-\pi,\pi)$. Recall that Parseval's identity is $$ \frac{1}{\pi}\int_{-\pi}^{\pi}|f(x)|^2dx=\frac{1}{2}a_0^2+\sum_{n=1}^\infty(a_n^2+b_n^2). $$ Note $$ x=2\sum_{n=1}^\infty(-1)^{n+1}\frac{\sin nx}{n}, \quad x\in(-\pi,\pi). $$ Using Parseval's identity gives $$ 4\sum_{n=1}^\infty\frac{1}{n^2}=\frac{1}{\pi}\int_{-\pi}^{\pi}x^2\,dx=\frac{2\pi^2}{3}$$ or $$ \sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}.$$
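(Added.) A numerical check of the Parseval step for $f(x)=x$, with the normalization $a_n=\frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx$ so that $b_n = \frac{2(-1)^{n+1}}{n}$:

```python
import math

# Check (added, not in the original answer) of Parseval for f(x) = x on
# (-pi, pi): b_n = 2(-1)^{n+1}/n, a_n = 0, so
#   (1/pi) ∫ x^2 dx = 2*pi^2/3  should equal  sum b_n^2 = 4 * sum 1/n^2.
lhs = 2 * math.pi**2 / 3
rhs = sum((2 / n) ** 2 for n in range(1, 1_000_000))  # b_n^2 = 4/n^2
```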

---

Here's mine. I know I'm answering late, but I'll still share it.
We'll use the expansion of $\tanh^{-1}$: $$\frac{1}{2}\log\frac{1+y}{1-y}=\sum_{n\geq0}\frac{y^{2n+1}}{2n+1},\quad|y|<1.$$ We start with this equality (an interchange of the order of integration):
$$\int_{-1}^{1}\int_{-1}^{1}\frac{dy\,dx}{1+2xy+y^2}=\int_{-1}^{1}\int_{-1}^{1}\frac{dx\,dy}{1+2xy+y^2}$$ The LHS (integrating in $y$ first) gives: $$\int_{-1}^{1}\int_{-1}^{1}\frac{dy\,dx}{1+2xy+y^2}=\int_{-1}^{1}\left[\frac{\arctan \frac{x+y}{\sqrt{1-x^2}}}{\sqrt{1-x^2}}\right]_{y=-1}^{y=1}dx=\int_{-1}^{1}\frac{\pi}{2\sqrt{1-x^2}}dx=\frac{\pi^2}{2}$$ The RHS (integrating in $x$ first) yields: \begin{align} \int_{-1}^{1}\int_{-1}^{1}\frac{dx\,dy}{1+2xy+y^2}&=\int_{-1}^{1}\left[\frac{\log(1+2xy+y^2)}{2y}\right]_{x=-1}^{x=1}dy\\ &=\int_{-1}^{1}\frac{\log\frac{1+y}{1-y}}{y}dy\\ &=2\int_{-1}^{1}\sum_{n\geq0}\frac{y^{2n}}{2n+1}dy\\ &=4\sum_{n\geq0}\frac{1}{(2n+1)^2} \end{align} Hence, $$\sum_{n\geq0}\frac{1}{(2n+1)^2}=\frac{\pi^2}{8}$$ Now $$\frac{3}{4}\zeta(2)=\zeta(2)-\frac{1}{4}\zeta(2)=\sum_{n\geq 1}\frac{1}{n^2}-\sum_{m\geq1}\frac{1}{(2m)^2}=\sum_{r\geq0}\frac{1}{(2r+1)^2}=\frac{\pi^2}{8}$$ Solving this we get $$\zeta(2)=\frac{\pi^2}{6}$$ as desired. Source: https://www.emis.de/journals/GM/vol16nr4/ivan/ivan.pdf
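(Added.) The value $\pi^2/2$, i.e. the value of $\int_{-1}^{1}\log\frac{1+y}{1-y}\,\frac{dy}{y}$, can be confirmed by direct quadrature:

```python
import math

# Midpoint-rule check (added, not in the original answer) that
#   ∫_{-1}^{1} log((1+y)/(1-y)) / y dy = pi^2 / 2.
# The integrand extends continuously to 2 at y = 0, and the endpoint
# log singularities are integrable.
steps = 200_000
h = 2.0 / steps
total = 0.0
for j in range(steps):
    y = -1.0 + (j + 0.5) * h  # midpoints never hit y = 0 exactly
    total += math.log((1 + y) / (1 - y)) / y * h
```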

---

Since $\int_0^1 \frac{dx}{1+x^2}=\frac{\pi}{4}$, we have

$$\frac{\pi^2}{16}=\int_0^1\int_0^1\frac{dydx}{(1+x^2)(1+y^2)}\overset{t=xy}{=}\int_0^1\int_0^x\frac{dtdx}{x(1+x^2)(1+t^2/x^2)}$$

$$=\int_0^1\int_t^1\frac{dx\,dt}{x(1+x^2)(1+t^2/x^2)}\overset{x^2\to x}{=}\frac12\int_0^1\left(\int_{t^2}^1\frac{dx}{(1+x)(x+t^2)}\right)dt$$

$$=-\frac12\int_0^1\frac{\ln\left(\frac{4t^2}{(1+t^2)^2}\right)}{1-t^2}dt\overset{t=\frac{1-x}{1+x}}{=}-\frac12\int_0^1\frac{\ln\left(\frac{1-x^2}{1+x^2}\right)}{x}dx$$

$$\overset{x^2\to x}{=}-\frac14\int_0^1\frac{\ln\left(\frac{1-x}{1+x}\right)}{x}dx=-\frac14\int_0^1\frac{\ln\left(\frac{(1-x)^2}{1-x^2}\right)}{x}dx$$

$$=-\frac12\int_0^1\frac{\ln(1-x)}{x}dx+\frac14\underbrace{\int_0^1\frac{\ln(1-x^2)}{x}dx}_{x^2\to x}$$

$$=-\frac38\int_0^1\frac{\ln(1-x)}{x}dx\Longrightarrow \int_0^1\frac{-\ln(1-x)}{x}dx=\frac{\pi^2}{6}$$


Remark:

This solution can be considered a proof that $\zeta(2)=\frac{\pi^2}{6}$ as we have $\int_0^1\frac{-\ln(1-x)}{x}dx=\text{Li}_2(x)|_0^1=\text{Li}_2(1)=\zeta(2)$

---

The following proof is by Khalaf Ruhemi (he is not an MSE member).

By partial fraction decomposition, we have $$\frac{y}{(1+y^2)(y^2+x^2)}=\frac{1}{x^2-1}\left(\frac{y}{1+y^2}-\frac{y}{y^2+x^2}\right).$$ Integrate both sides from $y=0$ to $y=\infty$, \begin{gather*} \int_0^\infty\frac{y}{(1+y^2)(y^2+x^2)}\mathrm{d}y=\frac{1}{x^2-1}\int_0^\infty\left[\frac{y}{1+y^2}-\frac{y}{y^2+x^2}\right]\mathrm{d}y\\ =\frac{1}{x^2-1}\left[\frac12\ln(1+y^2)-\frac12\ln(y^2+x^2)\right]_0^\infty=\frac{1}{2(x^2-1)}\left[\ln\left(\frac{1+y^2}{y^2+x^2}\right)\right]_0^\infty\\ =\frac{1}{2(x^2-1)}\left[\ln(1)-\ln\left(\frac{1}{x^2}\right)\right]=\frac{1}{2(x^2-1)}\left[2\ln(x)\right]=\frac{\ln(x)}{x^2-1}. \end{gather*} Next, integrate both sides from $x=0$ to $x=\infty$ \begin{gather*} \int_0^\infty\frac{\ln(x)}{x^2-1}\mathrm{d}x=\int_0^\infty\int_0^\infty\frac{y}{(1+y^2)(y^2+x^2)}\mathrm{d}y\,\mathrm{d}x\\ \{\text{change the order of integration}\}\\ =\int_0^\infty\frac{1}{1+y^2}\left[\int_0^\infty\frac{y\,\mathrm{d}x}{y^2+x^2}\right]\mathrm{d}y\\ =\int_0^\infty\frac{1}{1+y^2}\left[\arctan\left(\frac{x}{y}\right)\right]_0^\infty dy=\int_0^\infty\frac{1}{1+y^2}\left[\frac{\pi}{2}-0\right] \mathrm{d}y\\ =\frac{\pi}{2}\int_0^\infty\frac{1}{1+y^2} dy=\frac{\pi}{2}\arctan(y)\bigg|_0^\infty=\frac{\pi}{2}\cdot\frac{\pi}{2}=\frac{\pi^2}{4}. 
\end{gather*} Thus, \begin{gather*} \frac{\pi^2}{4}=\int_0^\infty\frac{\ln(x)}{x^2-1}\mathrm{d}x=\left(\int_0^1+\int_1^\infty\right)\frac{\ln(x)}{x^2-1}\mathrm{d}x\\ =\int_0^1\frac{\ln(x)}{x^2-1}\mathrm{d}x+\underbrace{\int_1^\infty\frac{\ln(x)}{x^2-1}\mathrm{d}x}_{x\to1/x}\\ =2\int_0^1\frac{\ln(x)}{x^2-1}\mathrm{d}x=-\int_0^1\frac{\ln(x)}{1-x}\mathrm{d}x-\int_0^1\frac{\ln(x)}{1+x}\mathrm{d}x\\ \left\{\text{use $\frac{1}{1+x}=\frac{1}{1-x}-\frac{2x}{1-x^2}$ in the second integral}\right\}\\ =-2\int_0^1\frac{\ln(x)}{1-x}\mathrm{d}x+2\underbrace{\int_0^1\frac{x\ln(x)}{1-x^2}\mathrm{d}x}_{x^2\to x}\\ =-2\int_0^1\frac{\ln(x)}{1-x}\mathrm{d}x+\frac12\int_0^1\frac{\ln(x)}{1-x}\mathrm{d}x\\ =-\frac32\int_0^1\frac{\ln(x)}{1-x}\mathrm{d}x\overset{1-x\to y}{=}-\frac32\int_0^1\frac{\ln(1-y)}{y}\mathrm{d}y\\ \{\text{expand $\ln(1-y)$ in series}\}\\ =\frac32\sum_{n=1}^\infty \frac{1}{n}\int_0^1 y^{n-1}\mathrm{d}y=\frac32\sum_{n=1}^\infty\frac{1}{n^2}=\frac{3}{2}\zeta(2). \end{gather*} So we have $$\frac{\pi^2}{4}=\frac{3}{2}\zeta(2)\Longrightarrow \zeta(2)=\frac{\pi^2}{6}.$$

---

I thought I'd just add a rigorous, yet only slightly more involved, version of @QiaochuYuan's proof:

So the generating function of the Bernoulli numbers, given by $$ g(z) = \frac{z}{e^z - 1} $$ does have a partial fraction decomposition, but we have to divide by $z$ in order to make the pole sum converge, so $$ \frac{g(z)}{z} = \frac{1}{e^z - 1} = -\frac{1}{2} + \frac{1}{z} + \sum_{n=1}^\infty \frac{2z}{z^2 + (2\pi n)^2} $$ where we already collected the terms for each $n$ and $-n$ together. This is justified because the difference between $g(z)/z$ and the partial fraction decomposition is bounded and holomorphic, hence constant, and the limit may easily be computed at $0$. Near $0$ all of this converges absolutely, so that we are justified in permuting the order of summation in order to obtain the expression $$ \sum_{n=1}^\infty \frac{2 (-1)^k}{(2\pi n)^{2k+2}} = (-1)^k \frac{2\,\zeta(2k+2)}{(2\pi)^{2k+2}} $$ for the coefficient of $z^{2k+1}$ in the power series expansion of $g(z)/z$. Equating coefficients then yields the desired formula.

6
On

Looking through the answers, I’m honestly surprised that no one has posted this method yet, so I’ll do it myself, since I recently did a write-up that included it.

MSE doesn’t have the cancel package smh


Consider the Basel problem $$S=\sum^{\infty}_{n=1}\frac{1}{n^2}$$ We may note that the Laplace transform of $x$ produces exactly the terms of this series: $$\mathcal{L}\left\{x\right\}(n)=\frac{1}{n^2}=\int^{\infty}_{0}xe^{-nx}\text{ d}x,\qquad n>0$$ Since every index $n\ge1$ satisfies the restriction $n>0$, we may substitute this integral into the sum, giving \begin{align} S&=\sum^{\infty}_{n=1}\int^{\infty}_{0}xe^{-nx}\text{ d}x\\ &=\int^{\infty}_{0}x\sum^{\infty}_{n=1}e^{-nx}\text{ d}x\\ &=\int^{\infty}_0\frac{x\text{ d}x}{e^x-1}=I \end{align} The interchange of sum and integral in the second line is justified by the Fubini–Tonelli theorem, since the integrand is nonnegative on the interval of integration; the third line follows by summing the geometric series $\sum^{\infty}_{n=1}e^{-nx}=\frac{1}{e^x-1}$ for $x>0$.
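As a quick numerical sanity check (an illustration, not part of the proof; the truncation point $50$ and the step count are arbitrary choices), the integral $I$ can be compared against $\pi^2/6$:

```python
import math

def integrand(x):
    # x / (e^x - 1); the removable singularity at x = 0 is filled in by its limit 1
    return 1.0 if x == 0 else x / math.expm1(x)

# Composite Simpson's rule on [0, 50]; the tail beyond 50 is ~ 51*e^(-50), negligible
a, b, n = 0.0, 50.0, 100_000  # n must be even
h = (b - a) / n
total = integrand(a) + integrand(b)
for i in range(1, n):
    total += (4 if i % 2 else 2) * integrand(a + i * h)
integral = total * h / 3

print(integral, math.pi**2 / 6)  # both ~ 1.6449...
```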

How should we evaluate this integral? To do this, we will consider the (perhaps unexpected) function $$f(z)=\frac{z^2}{e^z-1}$$ This function has singularities at $z=2\pi i n$, $n\in\mathbb{Z}$, but the singularity at the origin is removable, as can be seen by taking the limit of the function there. This means that when we set up our rectangular contour, we do not need to indent it around the origin: the function is well behaved there, and the integral along such a small circular indent would go to $0$ regardless.

Hence, consider the following contour

(Figure: the rectangular contour $\mathcal{C}$ with vertices $0$, $R$, $R+2\pi i$, $2\pi i$, with sides $B$ (bottom), $\mathcal{R}$ (right), $T$ (top), $L$ (left), and a quarter-circle indent $r$ of radius $\epsilon$ around the pole at $2\pi i$.)

$$\oint_{\mathcal{C}}f(z)\text{ d}z=\int_B+\int_{\mathcal{R}}+\int_T+\int_r+\int_Lf(z)\text{ d}z=0$$ Parameterization of the contour gives \begin{alignat*}{5} B&:\text{ }z=x,\qquad &\text{d}z&=\text{d}x,\qquad &x&\in[0, R\,]\\ \mathcal{R}&:\text{ }z=R+iy,\qquad &\text{d}z&=i\text{ d}y,\qquad &y&\in[0, 2\pi]\\ T&:\text{ }z=2\pi i+x,\qquad &\text{d}z&=\text{d}x,\qquad &x&\in[R, \epsilon\,]\\ r&:\text{ }z=2\pi i+\epsilon e^{i\theta},\qquad &\text{d}z&=i\epsilon e^{i\theta}\text{ d}\theta,\qquad &\theta&\in\left[0, -\frac{\pi}{2}\right]\\ L&:\text{ }z=iy,\qquad &\text{d}z&=i\text{ d}y,\qquad &y&\in[2\pi-\epsilon, 0] \end{alignat*} Rather than evaluating these integrals one by one, we first add $\int_B$ and $\int_T$ and cancel terms; this cancellation is the reason we raised the numerator power by $1$ in our $f(z)$. We have \begin{align} \lim_{R\to+\infty,\,\epsilon\to+0}\int_B+\int_Tf(z)\text{ d}z&=\lim_{R\to+\infty,\,\epsilon\to+0}\int^R_0\frac{x^2\text{ d}x}{e^x-1}+\int_R^{\epsilon}\frac{(2\pi i+x)^2\text{ d}x}{{e^{2\pi i}}e^x-1}\\ &=\int^{\infty}_0\frac{x^2\text{ d}x}{e^x-1}-\int_0^{\infty}\frac{(2\pi i+x)^2\text{ d}x}{e^x-1}\\ &={\int^{\infty}_0\frac{x^2\text{ d}x}{e^x-1}}-{\int^{\infty}_0\frac{x^2\text{ d}x}{e^x-1}}-\int^{\infty}_0\frac{4\pi i x\text{ d}x}{e^x-1}+\int^{\infty}_0\frac{4\pi^2\text{ d}x}{e^x-1}\\ &=-4\pi iI+\int^{\infty}_0\frac{4\pi^2\text{ d}x}{e^x-1} \end{align} Next, we have \begin{align} \left|\int_{\mathcal{R}}f(z)\text{ d}z\right|&\le\int^{2\pi}_0\frac{\left|R+iy\right|^2\cdot{|i|}\text{ d}y}{\left|\displaystyle e^R e^{iy}-1\right|}\\ &\le\int^{2\pi}_0\frac{R^2+y^2}{\left|\left|\displaystyle e^R\right|{\left|e^{iy}\right|}-|1|\right|}\text{ d}y\\ &=\int^{2\pi}_0\frac{R^2+y^2}{e^R-1}\text{ d}y=\frac{2\pi\left(4\pi^2+3R^2\right)}{3\left(e^R-1\right)} \end{align} So this integral vanishes in the limit, since $$\frac{2\pi}{3}\lim_{R\to+\infty}\frac{4\pi^2+3R^2}{e^R-1}=0$$ The integral about $r$ gives
$$\int^{-\frac{\pi}2}_0\frac{\left(2\pi i + \epsilon e^{i\theta}\right)^2}{{e^{2\pi i}}e^{\epsilon e^{i\theta}}-1}\cdot i\epsilon e^{i\theta}\text{ d}\theta$$ For $-\frac{\pi}2\le\theta\le0$ and $\epsilon$ small, the integrand is bounded by a constant independent of $\epsilon$, and since the integration interval is finite, we may swap the $\epsilon$ limit with the integral. Meanwhile, the limit itself can be computed by L'Hôpital's rule, which yields $$\lim_{\epsilon\to+0}\int^{-\frac{\pi}2}_0\frac{\left(2\pi i + \epsilon e^{i\theta}\right)^2}{e^{\epsilon e^{i\theta}}-1}\cdot i\epsilon e^{i\theta}\text{ d}\theta=\int^{-\frac{\pi}2}_0\lim_{\epsilon\to+0}\frac{\left(2\pi i + \epsilon e^{i\theta}\right)^2}{e^{\epsilon e^{i\theta}}-1}\cdot i\epsilon e^{i\theta}\text{ d}\theta=4i\pi^2\int^0_{-\frac{\pi}{2}}\text{d}\theta=2i\pi^3$$ We have one more integral to go, where we can see that $$\lim_{\epsilon\to+0}\int_Lf(z)\text{ d}z=\lim_{\epsilon\to+0}\int^{0}_{2\pi-\epsilon}\frac{-y^2\cdot i\text{ d}y}{e^{iy}-1}=\int^{2\pi}_{0}\frac{iy^2\text{ d}y}{e^{iy}-1}$$ All in all, we end up with $$\oint_{\mathcal{C}}f(z)\text{ d}z=\int_B+{\int_{\mathcal{R}}}+\int_T+\int_r+\int_Lf(z)\text{ d}z=-4\pi iI+\int^{\infty}_0\frac{4\pi^2\text{ d}x}{e^x-1}+2i\pi^3+\int^{2\pi}_0\frac{iy^2\text{ d}y}{e^{iy}-1}=0$$ Note that the two unsolved integrals that remain are divergent, and would actually cancel each other out if we took a principal value "over" the two integrals, but we do not need to deal with them. Instead, we can take the imaginary part of the whole equation, which gives $$-4\pi I+2\pi^3+\int^{2\pi}_0\Im\left(\frac{iy^2}{e^{iy}-1}\right)\text{ d}y=0$$ Separating real and imaginary parts of the last integrand by some simple conjugate multiplication, we end up with $$-4\pi I+2\pi^3+\int^{2\pi}_0\frac{-y^2\text{ d}y}{2}=0$$ $$I=S=\frac{-2\pi^3+\frac{4}{3}\pi^3}{-4\pi}=\boxed{\frac{\pi^2}{6}}$$
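As a sanity check on the last simplification (an illustration only, not part of the argument), the identity $\Im\left(\frac{iy^2}{e^{iy}-1}\right)=-\frac{y^2}{2}$ and the closing arithmetic can be verified numerically:

```python
import cmath
import math

# Check Im(i*y^2 / (e^{iy} - 1)) = -y^2/2 at sample points in (0, 2*pi)
samples = [0.5, 1.0, 2.0, 3.0, 5.0, 6.0]
max_err = max(
    abs((1j * y**2 / (cmath.exp(1j * y) - 1)).imag + y**2 / 2) for y in samples
)
print(max_err)  # tiny (floating-point roundoff)

# Final arithmetic: -4*pi*I + 2*pi^3 - (2*pi)^3/6 = 0  =>  I = pi^2/6
I = (2 * math.pi**3 - (2 * math.pi) ** 3 / 6) / (4 * math.pi)
print(I, math.pi**2 / 6)
```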