How does cutoff regularization lead to $\sum_{n=1}^\infty n=-\frac{1}{12}$?


On Wikipedia, cutoff regularization (more precisely, the asymptotic behavior of the smoothed sum) is described as follows:

Smoothing is a conceptual bridge between zeta function regularization, with its reliance on complex analysis, and Ramanujan summation

where the reference somewhat skips over the derivation, just stating that the cutoff function is some $\eta\in C^2$ equal to $1$ at $0$ and then Taylor expanding $\eta$, so that $$\sum_{n=1}^\infty(-1)^{n-1}\eta(n/N)=\frac{\eta(1/N)}{2}+\sum_{m=1}^\infty \frac{\eta((2m-1)/N)-2\eta(2m/N)+\eta((2m+1)/N)}{2}.$$

Could you show exactly how the Taylor expansion is done, please? In particular, how does the cutoff regularization yield $$\sum_{n=1}^\infty n\eta(n/N)=-\frac{1}{12}+C_{\eta,1} N^2+O\!\left(\frac{1}{N}\right)?$$

When thinking about the asymptotic behavior, one intuitively wants to complete the square, i.e. $$\frac{1}{2}(N^2+N)=\frac{1}{2}\left(N+\frac{1}{2}\right)^2-\frac{1}{8}.$$ Why does this not agree with the cutoff regularization?
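(As a sanity check on the claimed asymptotic, here is a small numerical experiment, with the hypothetical choice of cutoff $\eta(x)=e^{-x^2}$, for which $C_{\eta,1}=\int_0^\infty t\,e^{-t^2}\,\mathrm{d}t=\frac12$:)

```python
import math

# Smooth cutoff eta(x) = exp(-x^2): eta(0) = 1, rapid decay.
# C_{eta,1} = int_0^inf t*exp(-t^2) dt = 1/2, so the claim reads
#   sum_{n>=1} n*eta(n/N) = N^2/2 - 1/12 + O(1/N).
def smoothed_sum(N, terms=50):
    # Terms with n much larger than N are negligibly small.
    return sum(n * math.exp(-(n / N) ** 2) for n in range(1, terms * N + 1))

N = 100
residual = smoothed_sum(N) - N ** 2 / 2
print(residual)  # close to -1/12 = -0.08333...
```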

There are 2 answers below.

BEST ANSWER

Let $k\in\{0,1,2,\ldots\}$ and let $\eta \in C^{k+1}_c(\mathbb{R})$ be arbitrary. Also, define $F$ by

$$ F(x) = \int_{0}^{x} t^k \eta(t) \, \mathrm{d}t. $$

Now let us write $x_{n} = n/N$. Then by Taylor's theorem, for each $j \in \{0,1,\ldots,k\}$, there exists $\xi_{n,j} \in [x_n, x_{n+1}]$ such that

\begin{align*} F^{(j)}(x_{n+1}) = F^{(j)}\left(x_{n}+\frac{1}{N}\right) = \sum_{l=j}^{k+1} \frac{F^{(l)}(x_n)}{(l-j)!} \frac{1}{N^{l-j}} + \frac{F^{(k+2)}(\xi_{n,j})}{(k+2-j)!} \frac{1}{N^{k+2-j}}. \end{align*}

Multiplying both sides by $N^{k+1-j}$ and rearranging, this is equivalent to

\begin{align*} N^{k+1-j} \left[ F^{(j)}(x_{n+1}) - F^{(j)}(x_n) \right] &= \sum_{l=j+1}^{k+1} N^{k+1-l} \frac{F^{(l)}(x_n)}{(l-j)!} + \frac{F^{(k+2)}(\xi_{n,j})}{(k+2-j)!} \frac{1}{N}. \end{align*}

Now we will sum both sides of the above equality over $n = 0,1,2,\ldots$. In doing so, we observe that

$$ F^{(j)}(\infty) = \begin{cases} F(\infty) = \int_{0}^{\infty} t^k \eta(t) \, \mathrm{d}t, & j = 0, \\ 0, & j \geq 1, \end{cases} $$

and

$$ F^{(j)}(0) = \begin{cases} 0, & 0 \leq j \leq k, \\ k!\eta(0), & j = k+1. \end{cases} $$

So, if we write $\sigma_N^{(j)} = \sum_{n=0}^{\infty} F^{(j)}(x_n) $, then by telescoping and a Riemann-sum argument for the remainder term, we get

\begin{align*} \sum_{l=j+1}^{k+1} \frac{1}{(l-j)!} N^{k+1-l} \sigma_N^{(l)} &= N^{k+1-j}\left[ F^{(j)}(\infty) - F^{(j)}(0) \right] - \frac{1}{(k+2-j)!} \sum_{n=0}^{\infty} F^{(k+2)}(\xi_{n,j})\frac{1}{N}\\ &= N^{k+1} F(\infty) \mathbf{1}_{\{j=0\}} + \frac{k!}{(k+2-j)!}\eta(0) + o(1). \end{align*}

This system of equations can be put into a matrix form:

$$ \underbrace{ \begin{pmatrix} \frac{1}{1!} & \frac{1}{2!} & \cdots & \frac{1}{(k+1)!} \\ 0 & \frac{1}{1!} & \cdots & \frac{1}{k!} \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & \frac{1}{1!} \end{pmatrix} }_{=:A} \begin{pmatrix} N^k \sigma_N^{(1)} \\ N^{k-1} \sigma_N^{(2)} \\ \vdots \\ \sigma_N^{(k+1)} \end{pmatrix} = \begin{pmatrix} N^{k+1} F(\infty) \\ 0 \\ \vdots \\ 0 \end{pmatrix} + \begin{pmatrix} \frac{1}{(k+2)!} \\ \frac{1}{(k+1)!} \\ \vdots \\ \frac{1}{2!} \end{pmatrix} k!\eta(0) + o(1) $$

By noting that the inverse of $A$ is given by

$$ A^{-1} = \begin{pmatrix} \frac{B_0}{0!} & \frac{B_1}{1!} & \cdots & \frac{B_k}{k!} \\ 0 & \frac{B_0}{0!} & \cdots & \frac{B_{k-1}}{(k-1)!} \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & \frac{B_0}{0!} \end{pmatrix}, $$

where $B_0, B_1, \ldots$ are the Bernoulli numbers, it follows that

$$ \sum_{n=0}^{\infty} n^k \eta\left(\frac{n}{N}\right) = N^k\sigma_N^{(1)} = N^{k+1}\int_{0}^{\infty} t^k\eta(t) \, \mathrm{d}t + \sum_{j=0}^{k} \frac{B_j}{j!(k+2-j)!} k!\eta(0) + o(1). $$
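One can verify the claimed form of $A^{-1}$ with exact rational arithmetic. The following sketch (helper names are mine) builds $A$, inverts it by back-substitution, and compares with the Bernoulli numbers in the convention $B_1 = -\frac{1}{2}$:

```python
from fractions import Fraction
from math import factorial

def build_A(n):
    # (i, j) entry is 1/(j - i + 1)! on and above the diagonal, 0 below.
    return [[Fraction(1, factorial(j - i + 1)) if j >= i else Fraction(0)
             for j in range(n)] for i in range(n)]

def invert_upper_triangular(A):
    # Solve A X = I column by column with back-substitution (exact rationals).
    n = len(A)
    X = [[Fraction(0)] * n for _ in range(n)]
    for col in range(n):
        for i in range(n - 1, -1, -1):
            rhs = Fraction(int(i == col))
            rhs -= sum(A[i][k] * X[k][col] for k in range(i + 1, n))
            X[i][col] = rhs / A[i][i]
    return X

# Bernoulli numbers B_0..B_5 in the convention with B_1 = -1/2.
B = [Fraction(1), Fraction(-1, 2), Fraction(1, 6), Fraction(0),
     Fraction(-1, 30), Fraction(0)]

Ainv = invert_upper_triangular(build_A(6))
assert all(Ainv[i][j] == B[j - i] / factorial(j - i)
           for i in range(6) for j in range(6) if j >= i)
```

(This is the Toeplitz-matrix statement that the power series $x/(e^x-1)=\sum_m B_m x^m/m!$ inverts $(e^x-1)/x=\sum_m x^m/(m+1)!$.)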

Using the identity $\sum_{j=0}^{k+1} \binom{k+2}{j} B_j = 0$, this further simplifies to

$$ \sum_{n=0}^{\infty} n^k \eta\left(\frac{n}{N}\right) = N^{k+1}\int_{0}^{\infty} t^k\eta(t) \, \mathrm{d}t - \frac{B_{k+1}}{k+1}\eta(0) + o(1). $$

Moreover, if we assume in addition that $\eta$ is in $C^{k+2}_c(\mathbb{R})$, then it is not hard to check that the error term is in fact $\mathcal{O}(1/N)$.
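To illustrate the final formula numerically, take $k = 3$ with the cutoff $\eta(x) = e^{-x^2}$ (my choice; its rapid decay plays the role of compact support here). Then $\int_0^\infty t^3 e^{-t^2}\,\mathrm{d}t = \frac12$ and the predicted constant is $-\frac{B_4}{4} = \frac{1}{120}$:

```python
import math

# k = 3 case of the formula, with eta(x) = exp(-x^2):
#   sum_{n>=1} n^3 eta(n/N) = N^4 * (1/2) - B_4/4 + o(1),  -B_4/4 = 1/120.
def smoothed_power_sum(k, N, terms=50):
    return sum(n ** k * math.exp(-(n / N) ** 2)
               for n in range(1, terms * N + 1))

N = 100
residual = smoothed_power_sum(3, N) - N ** 4 / 2
print(residual)  # close to 1/120 = 0.008333...
```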


Remark. The above computation is merely an adaptation of the proof of the Euler–Maclaurin formula, highlighting how Taylor's theorem can be directly connected to the smoothing regularization. That said, the conclusion above is a straightforward consequence of the Euler–Maclaurin formula.

SECOND ANSWER

Taylor expanding the first and last terms about $\frac{2m}{N}$ gives $$\eta((2m\pm 1)/N) = \eta(2m/N) \pm \frac{1}{N}\eta'(2m/N) + \frac{1}{2N^2}\eta''(2m/N) + O(1/N^3), $$ so we get a summand of $$ \eta((2m+1)/N) + \eta((2m- 1)/N)-2\eta(2m/N) = \frac{1}{N^2}\eta''(2m/N) + O(1/N^3) = O(1/N^2),$$ thus showing the summands have size $O(1/N^2).$ This is the Taylor expansion he's referring to.
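To see this in action: only about $O(N)$ of the second differences are non-negligible, so they contribute $O(1/N)$ in total, and the smoothed alternating sum tends to $\eta(0)/2 = \frac12$. A quick numerical check, with $\eta(x)=e^{-x^2}$ as my choice of cutoff:

```python
import math

# The identity rewrites sum_{n>=1} (-1)^(n-1) eta(n/N) as eta(1/N)/2 plus
# second differences of size O(1/N^2); roughly N of them matter, so the
# whole sum is eta(0)/2 + O(1/N) = 1/2 + O(1/N).
def alternating_smoothed_sum(N, terms=50):
    return sum((-1) ** (n - 1) * math.exp(-(n / N) ** 2)
               for n in range(1, terms * N + 1))

print(alternating_smoothed_sum(100))  # close to 1/2
```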

But I think you are confused. Tao isn't presenting a general method here, just an ad hoc calculation for $\sum_{n=1}^\infty (-1)^{n-1} \eta(n/N)$. There is no claim that this works for $\sum_{n=1}^\infty n\eta(n/N)$; he writes down the asymptotics for that sum around the same place, but defers the analysis to the next section, titled "Smoothed asymptotics".

(Regarding your last remark: $\frac{1}{2}(N^2+N)$ would be the "hard cutoff" asymptotics for $\sum_n n$, and the point of the article is that this is not really useful compared to smoothing it out, which gives more universal behavior and a connection to analytic continuation. I don't see what completing the square accomplishes; it seems irrelevant.)