Why does $1+2+3+4+\dots=-\frac1{12}$ in a couple different ways?

$1+2+3+4+\dots$ is undefined when using regular summation. If you use either Ramanujan summation or zeta-function regularization, then $1+2+3+4+\dots=-\frac1{12}$. This article lists some other definitions of summation, and they all give the same result.

My question is: why do all these seemingly unrelated definitions assign the same, seemingly arbitrary value of $-\frac1{12}$ to this seemingly simple divergent series? Is there one underlying method that they are all based on?

2 Answers


First note that Abel summation gives $\infty$.

Now I'd say your observed phenomenon is mainly because $(1-2^{1-s}) \zeta(s) = \eta(s)$ where $\eta(s)= \sum_{n=1}^\infty (-1)^{n+1} n^{-s}$ can be extended analytically to $\Re(s) > -K$ just by partial summation (since $\sum_{n=1}^N (-1)^{n+1} = \frac{1+(-1)^{N+1}}{2}$) and hence also by Abel summation.

Therefore $$\zeta(-1)(1-2^{2}) = \eta(-1) = \lim_{z \to 1^-} \sum_{n=1}^\infty (-1)^{n+1}n z^n=\lim_{z \to 1^-} z\frac{d}{dz}\frac{z}{1+z} =\frac{1}{4}$$

so $\zeta(-1) = \frac{1/4}{1-2^2} = -\frac{1}{12}$, and the same happens for any regularization method compatible with Abel summation on rational functions.
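This limit can be checked numerically (a quick sketch, not part of the original argument):

```python
def abel_eta(z, terms=100000):
    """Partial Abel sum: sum_{n>=1} (-1)**(n+1) * n * z**n, for 0 < z < 1."""
    return sum((-1) ** (n + 1) * n * z**n for n in range(1, terms))

# As z -> 1-, the Abel sum tends to eta(-1) = 1/4; the closed form is z/(1+z)**2.
for z in (0.9, 0.99, 0.999):
    print(z, abel_eta(z))
# zeta(-1) = eta(-1) / (1 - 2**2) = (1/4) / (-3) = -1/12
```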


1. Introduction

These methods can all be reinterpreted as analytic continuation methods. Suppose that the summation of some function $f(x)$ is divergent. The partial sum:

$$S(N) = \sum_{k=1}^{N}f(k)\tag{1}$$

as a function of $N$ will then tend to infinity in some way. We can then consider replacing $f(x)$ by a function of two variables $f(x,t)$ such that $f(x,0) = f(x)$ and that this is an analytic function of $t$. The idea is then that for some range of the parameter $t$ the summation converges and that we can analytically continue the result of the summation to $t = 0$. So, if we put:

$$S(N,t) = \sum_{k=1}^{N}f(k,t)\tag{2}$$

Then $\lim_{N\to\infty} S(N,t)$ exists for $t$ in a certain subset of the complex plane. Suppose that we can perform an asymptotic expansion of $S(N,t)$ for large $N$ for arbitrary $t$. This expansion can then contain powers of $N$, logarithms, etc. For $t=0$ the leading asymptotic behavior will be divergent. If we then assume that the terms in this asymptotic expansion change continuously as a function of $t$, then what must happen is that the terms that are divergent at $t = 0$ become terms that tend to zero or to some constant when $t$ lies in the domain where the summation converges, and no other divergent terms can appear there.

This means that the analytic continuation of the infinite summation to $t = 0$ should yield the constant term in the large-$N$ expansion at $t = 0$. There is therefore no need to explicitly perform any analytic continuation; all you need to do is perform a large-$N$ expansion of (1) and extract the constant term. This is what Ramanujan summation boils down to.
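As a minimal numerical sketch of this idea, take $f(k)=k$ with the regulator $f(k,t) = k e^{-kt}$ (one illustrative choice of parameter, not the only one). The regulated sum converges for $t>0$ to $e^{-t}/(1-e^{-t})^2$, whose small-$t$ expansion is $\frac{1}{t^2} - \frac{1}{12} + \frac{t^2}{240} + \dots$; the constant term is the regularized value $-\frac1{12}$:

```python
import math

def regulated_sum(t, terms=20000):
    """Partial sum of sum_{k>=1} k*exp(-k*t); converges for t > 0."""
    return sum(k * math.exp(-k * t) for k in range(1, terms))

# The small-t expansion of the limit is 1/t**2 - 1/12 + t**2/240 + ...,
# so subtracting the divergent 1/t**2 piece exposes the constant term -1/12.
for t in (0.1, 0.05, 0.01):
    print(t, regulated_sum(t) - 1 / t**2)
```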

We then also see that the results must be independent of the particular way the analytic continuation is performed. In this respect the computation using the zeta-function regularization is misleading as it suggests that the answer depends on using specifically that regularization.

2. Derivation of Ramanujan summation in the analytic continuation context

We can derive the Ramanujan summation method within the context of analytic continuation as follows. We split the summation (1):

$$S(N) = \sum_{k=1}^{p-1}f(k) + \sum_{k=p}^N f(k)$$

for some integer $p$, and then we apply the Euler-Maclaurin summation formula to the second summation:

$$S(N) = \sum_{k=1}^{p-1}f(k) + \int_p^{N}f(x) dx + \frac{1}{2}[f(p) + f(N)] + \sum_{r=1}^{\infty} \frac{B_{2r}}{(2r)!}\left[f^{(2r-1)}(N) - f^{(2r-1)}(p)\right] $$

Here the summation over $r$ is usually a divergent asymptotic expansion; it may be written more rigorously as a finite summation plus a remainder term. We can then write $\int_p^N f(x)dx$ in terms of the antiderivative $F$ of $f$ as $F(N) - F(p)$. If we also extend the summation over $k$ to $p$ and subtract $f(p)$, we get:

$$S(N) = \sum_{k=1}^p f(k) + A(N) - A(p)\tag{3}$$

where:

$$A(u) = F(u) + \frac{f(u)}{2} + \sum_{r=1}^{\infty}\frac{B_{2r}}{(2r)!}f^{(2r-1)}(u)\tag{4}$$

Then by the analytic continuation logic described above, we imagine that we could have introduced a parameter in the function $f(x)$ which, for some range of values of that parameter, would make the summation to infinity convergent. All the terms in (3) that diverge in the limit of $N$ to infinity would then tend to zero or to some constant, and analytically continuing the result back to the parameter value that yields the original sum would have the effect of setting all these divergent terms to zero. We can then do this directly in (3) as follows.

If we denote by $D(u)$ all the terms in $A(u)$ that we're going to set to zero, and by $c$ the constant term that we're going to keep, then we can write:

$$A(u) = D(u) + c + o(1)$$

The constant $c$ can then only come from the antiderivative $F(u)$, because after analytic continuation to a domain where the summation is convergent, the summand must tend to zero, so all the other terms in $A(N)$ must tend to zero. Using that $S(N)$ does not depend on $p$ allows us to take the limit of $p$ to infinity in (3). We can then write:

$$S(N) = \lim_{p\to\infty}\left[\sum_{k=1}^{p}f(k) - D(p)\right]+ D(N) + o(1)$$

Note that the constant term $c$ has dropped out. Deleting the terms in $D(N)$ per the analytic continuation argument, together with the terms that tend to zero, gives the summation result:

$$S = \lim_{p\to\infty}\left[\sum_{k=1}^{p}f(k) - D(p)\right]\tag{5}$$

In the case of the summation over the positive integers, we have $D(p) = \frac{p^2}{2} + \frac{p}{2}+\frac{1}{12}$, which combined with the fact that $\sum_{k=1}^{p} k = \frac{p^2}{2} + \frac{p}{2}$ yields the result that the summation is $-\frac{1}{12}$. An example where the zeta-function regularization fails is the harmonic series, but (5) gives us the immediate result that its sum is Euler's constant $\gamma$.
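A quick numerical check of (5) for these two examples (a sketch; the closed forms for $D(p)$ are taken from the text above):

```python
import math

# f(k) = k: D(p) = p**2/2 + p/2 + 1/12, and (5) is already exact at finite p
p = 1000
ramanujan_integers = sum(range(1, p + 1)) - (p**2 / 2 + p / 2 + 1 / 12)
print(ramanujan_integers)          # ≈ -1/12 ≈ -0.0833333...

# f(k) = 1/k: D(p) = log(p), and (5) converges to Euler's constant
p = 10**6
ramanujan_harmonic = sum(1 / k for k in range(1, p + 1)) - math.log(p)
print(ramanujan_harmonic)          # ≈ 0.577216 (Euler's constant)
```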

3. Expression in terms of the partial sum

In case we have an explicit formula for the partial sum, the following summation formula for convergent series can also be used:

$$\sum_{k=1}^{\infty}f(k) = \int_p^{\infty}f(x) dx + \int_{p-1}^p S(x) dx\tag{6}$$

where $p$ is an arbitrary real or complex number and $S(x)$ is the analytic continuation of the partial sum:

$$S(N) = \sum_{k=1}^N f(k)\tag{7}$$

from the integers to the complex plane using Carlson's theorem, imposing the conditions assumed in that theorem. It follows that $f(x)$ also needs to be defined using Carlson's theorem if it is only specified for integer arguments, and (7) then gives $f(x)$ in terms of $S(x)$ via:

$$f(x) = S(x) - S(x-1)$$

To prove (6), we write:

$$ \sum_{k=1}^{\infty} f(k) = \lim_{x\to \infty} S(x)$$

The first integral on the r.h.s. of (6) can be written as:

$$\begin{split}\int_p^{\infty}f(x) dx &= \lim_{R\to\infty} \left(\int_p^{R}S(x)dx - \int_{p-1}^{R-1}S(x) dx\right)\\ &= \lim_{R\to\infty} \int_{R-1}^R S(x) dx - \int_{p-1}^p S(x) dx\end{split}$$

The integral from $R-1$ to $R$ of $S(x)$ obviously tends to the same limit for $R$ to infinity as $S(x)$ for $x$ to infinity. The last term in (6) cancels out the integral from $p-1$ to $p$ in the above expression.
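As a sanity check of (6) on a convergent series (an illustrative example not in the original text), take $f(k) = z^k$ with $0 < z < 1$; the continued partial sum is $S(x) = z(1-z^x)/(1-z)$, and the right-hand side of (6) should equal $z/(1-z)$:

```python
import math

z = 0.5   # any 0 < z < 1 works
p = 1.0   # p is arbitrary in (6)

def S(x):
    """Analytic continuation of the partial sum of f(k) = z**k."""
    return z * (1 - z**x) / (1 - z)

int_f = -z**p / math.log(z)   # closed form of the integral of z**x from p to infinity

# integral of S(x) from p-1 to p, by the midpoint rule
n = 10000
h = 1.0 / n
int_S = sum(S(p - 1 + (i + 0.5) * h) for i in range(n)) * h

print(int_f + int_S)  # ≈ z/(1-z) = 1.0
```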

In case of a divergent series, we can again set up an analytic continuation argument. We then assume that we could have inserted a parameter into $f(x)$ so that the summation would, for some domain of that parameter, be convergent. This means that the integral $\int_p^{\infty}f(x) dx$ in (6) would be convergent there, so cutting off the integral at an upper limit of $R$ would introduce terms depending on $R$ that tend to zero. But these terms don't tend to zero for the actual function $f(x)$. Therefore, defining the summation by analytic continuation amounts to setting these terms to zero by hand.

In case of the summation over the positive integers, taking $p=0$, we have for the first integral in (6): $\int_0^R x\, dx = \frac{R^2}{2}$. The analytically continued partial sum is $S(x) = \frac{1}{2} x (x+1)$. The last term of (6) is then $\int_{-1}^0 S(x)\, dx = \frac{1}{6}- \frac{1}{4} = -\frac{1}{12}$. Therefore deleting the $\frac{R^2}{2}$ term per the analytic continuation argument yields the value $-\frac{1}{12}$ for the summation.
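The last integral can be verified exactly with rational arithmetic (a small sketch):

```python
from fractions import Fraction

def int_S(a, b):
    """Exact integral of S(x) = x*(x+1)/2 from a to b, via the antiderivative x**3/6 + x**2/4."""
    F = lambda x: Fraction(x)**3 / 6 + Fraction(x)**2 / 4
    return F(b) - F(a)

print(int_S(-1, 0))  # -1/12
```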

4. Evaluation of divergent summations using Ramanujan's master theorem

Using Ramanujan's master theorem, we can derive the following identity:

$$\int_0^{\infty}\frac{g(x)}{1+x}dx = -\sum_{k=0}^{\infty}\frac{dc_k}{dk}\tag{8}$$

where the $c_k$ are given by the series expansion coefficients of $g(x)$:

$$g(x) = \sum_{k=0}^{\infty}(-1)^k c_k x^k\tag{9}$$

and these coefficients are then analytically continued using Carlson's theorem. In (8) we assume that $g(x)$ has no singularities on the unit disk, otherwise the summation will diverge. To sum a series, we can then obtain $c_k$ by integrating minus the summand with respect to $k$, write down the alternating generating function $g(x)$, and (8) then gives us an integral expression for the summation.

When we sum a divergent summation this way, it's inevitable that the function $g(x)$ will not satisfy the conditions for which (8) is valid. However, we can then again appeal to analytic continuation to justify the integral as representing the regularized summation of the series.

We can derive (8) as follows. It follows from (9) that the series expansion of $\frac{g(x)}{1+x}$ can be written as:

$$\frac{g(x)}{1+x} = \sum_{k=0}^{\infty}(-1)^k a_k x^k$$

where:

$$a_k = \sum_{r=0}^k c_r\tag{10}$$

Using the version of Ramanujan's master theorem that states $\int_0^{\infty}x^{s-1}\sum_{k=0}^{\infty}(-1)^k a_k x^k\, dx = \frac{\pi}{\sin(\pi s)}a_{-s}$, applied at $s=1$, it follows that:

$$\int_0^{\infty}\frac{g(x)}{1+x}dx = \lim_{s\to 1}\frac{\pi}{\sin(\pi s)} a_{-s} = \lim_{\epsilon\to 0} \frac{a_{-1+\epsilon}}{\epsilon}\tag{11}$$

We must then find an expression for the analytic continuation of $a_k$. We can do this by writing the summation in (10) as:

$$a_k = \sum_{r=0}^k c_r = \sum_{r=0}^{\infty}c_r - \sum_{r=k+1}^{\infty}c_r= \sum_{r=0}^{\infty}\left(c_r-c_{r+k+1}\right)$$

The limit in (11) can then be written as:

$$\lim_{\epsilon\to 0}\frac{a_{-1+\epsilon}}{\epsilon} = \lim_{\epsilon\to 0}\frac{1}{\epsilon}\sum_{r=0}^{\infty}\left(c_r-c_{r+\epsilon}\right)=-\sum_{r=0}^{\infty}\frac{dc_r}{dr}$$ which proves (8).

To compute the sum of the positive integers, we must put $c_k = -\frac{k^2}{2}$. The alternating generating function can be easily computed by putting $x = \exp(t)$ in:

$$\sum_{k=0}^{\infty}(-1)^k x^k = \frac{1}{1+x}$$

This yields:

$$\sum_{k=0}^{\infty}(-1)^k \exp(k t) = \frac{1}{1+\exp(t)}$$

We replace $t$ by $t+\epsilon$, expand to second order in $\epsilon$ and extract the coefficient of $\epsilon^2$. This will then yield the alternating generating function up to a minus sign. We have:

$$ \frac{1}{1+\exp(t+\epsilon)} = \frac{1}{1+\exp(t)}\frac{1}{1+\frac{\left(\epsilon+\frac{1}{2}\epsilon^2+\mathcal{O}\left(\epsilon^3\right)\right)\exp(t)}{1+\exp(t)}}$$

The alternating generating function $g(x)$ is minus the coefficient of $\epsilon^2$ in $\frac{1}{1+\exp(t+\epsilon)}$; we can read off from the above expression that this is:

$$\frac{\exp(t)}{2[1+\exp(t)]^2} - \frac{\exp(2 t)}{[1+\exp(t)]^3}$$

Substituting back $\exp(t) = x$ yields the expression for $g(x)$:

$$g(x) = \frac{x}{2(1+x)^2} - \frac{x^2}{(1+x)^3}= \frac{x-x^2}{2(1+x)^3}$$
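As a consistency check (a sketch using exact rational arithmetic), the Taylor coefficients of this $g(x)$ should be $(-1)^k c_k = (-1)^{k+1}\frac{k^2}{2}$:

```python
from fractions import Fraction

def g_coeff(k):
    """Taylor coefficient of x**k in g(x) = (x - x**2)/(2*(1+x)**3)."""
    # (1+x)**(-3) has Taylor coefficients (-1)**j * (j+1)*(j+2)/2
    def inv_cube(j):
        return Fraction((-1)**j * (j + 1) * (j + 2), 2)
    c = Fraction(0)
    if k >= 1:
        c += inv_cube(k - 1) / 2   # contribution of the x/2 factor
    if k >= 2:
        c -= inv_cube(k - 2) / 2   # contribution of the -x**2/2 factor
    return c

# expected coefficients: (-1)**k * c_k with c_k = -k**2/2
for k in range(8):
    assert g_coeff(k) == Fraction((-1)**(k + 1) * k**2, 2)
print("coefficients match")
```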

Clearly $g(x)$ is singular at $x = -1$, which means that the identity (8) is not valid, but it will yield the correct regularized summation per the usual analytic continuation argument. And just like in the other methods discussed above, we don't need to set up an explicit analytic continuation from the domain where (8) is valid; we only need to assume that it exists, and the result is then given by applying (8) to the case at hand:

$$\begin{split}\int_0^{\infty}\frac{g(x)}{1+x}dx &= \int_0^{\infty}\frac{x-x^2}{2(1+x)^4}dx = \int_1^{\infty}\frac{t-1 - (t-1)^2}{2t^4}dt = -\int_1^{\infty}\frac{2-3t +t^2}{2t^4}dt\\ &=-\frac{1}{12}\end{split}$$
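The final integral can also be checked numerically (a sketch; the substitution $x = u/(1-u)$ maps $[0,1)$ onto $[0,\infty)$):

```python
def integrand(x):
    """g(x)/(1+x) = (x - x**2) / (2*(1+x)**4)."""
    return (x - x**2) / (2 * (1 + x)**4)

def transformed(u):
    # substitution x = u/(1-u), dx = du/(1-u)**2
    x = u / (1 - u)
    return integrand(x) / (1 - u)**2

# midpoint rule on [0, 1]
n = 100000
h = 1.0 / n
val = sum(transformed((i + 0.5) * h) for i in range(n)) * h
print(val)  # ≈ -1/12 ≈ -0.0833333
```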