What makes $\frac{x(x+1)}{2}$ a "better" interpolant of the sum of the first $n$ positive integers than any other, and likewise in similar cases?

A long-time interest of mine has been trying to determine whether there is some "natural" criterion by which we can characterize various interpolants of functions that are at first defined only for integer or positive-integer inputs, such as exponentiation and the factorial function, which generalize to the exponential and gamma functions. The ultimate aim is to find an explicit and efficient series formula for the interpolation of tetration

$$^n a = \underbrace{a^{a^{a^{\cdots^a}}}}_\text{$n$ copies of $a$}$$

to noninteger values of $n$, in some suitably natural manner. But before we can get there, it seems more profitable to first look at simpler cases, and one of the simplest such cases seems to be interpolating discrete sums. For example, consider

$$f(n) = \sum_{k=1}^{n} k.$$

As written, this definition makes no sense for, say, $f\left(\frac{1}{2}\right)$. However, it is not hard to see that this can be converted into a "closed form" that, even better, doubles as an interpolative extension:

$$\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$$

which allows us to expand to real $x$ "lengths" via

$$\sum_{k=1}^{x} k := \frac{x(x+1)}{2}, \quad x \in \mathbb{R}$$

(or even $x \in \mathbb{C}$!) And then we can find

$$\sum_{k=1}^{1/2} k = f\left(\frac{1}{2}\right) = \frac{3}{8}.$$
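As a quick sanity check (an illustrative Python sketch, not part of the original question), the closed form agrees with the discrete sum at every positive integer and evaluates to $3/8$ at $x = \frac{1}{2}$:

```python
# Closed-form interpolant of the sum 1 + 2 + ... + n
def f(x):
    return x * (x + 1) / 2

# Agrees with the discrete sum at positive integers
for n in range(1, 10):
    assert f(n) == sum(range(1, n + 1))

print(f(0.5))  # 0.375, i.e. 3/8
```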

Another example where we can find such a sum is in the case of exponential functions. Consider

$$\sum_{k=0}^{n-1} 2^k$$

for positive integer $n$. It is easy to see this sums to

$$\sum_{k=0}^{n-1} 2^k = 2^n - 1$$

which lets us find that

$$\sum_{k=0}^{1/2} 2^k\ \text{"should be"}\ \sqrt{8} - 1 \approx 1.8284.$$

And of course for arbitrary $b$, we have the geometric series

$$\sum_{k=0}^{n-1} b^k = \frac{b^n - 1}{b - 1}.$$
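The same sanity check works here (illustrative Python sketch): the closed form reproduces the geometric sum at integer $n$, and the fractional upper limit $\frac{1}{2}$ corresponds to $n = \frac{3}{2}$ since the sum runs to $n-1$:

```python
# Geometric-series closed form (b^n - 1)/(b - 1) as an interpolant
def g(n, b=2):
    return (b ** n - 1) / (b - 1)

# Matches the discrete sum of 2^k for k = 0 .. n-1
for n in range(1, 10):
    assert g(n) == sum(2 ** k for k in range(n))

# Upper limit k = 1/2 means n = 3/2
print(g(1.5))  # sqrt(8) - 1 ≈ 1.8284
```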

However, what about for more general functions $f$? The fundamental problem is that, strictly speaking, these interpolants are not unique: most generally, we can "connect the dots" with any curves we like. Even if we impose some "natural" restrictions, say that

$$\sum_{k=0}^{x} f(k) = f(x) + \sum_{k=0}^{x-1} f(k)$$

it isn't enough: this only cuts the freedom down to an arbitrary 1-periodic perturbation. Yet, "for some intuitive reason", the interpolants above "seem just right": they are simple, comparable to the functions they came from, and suitably "graceful", while every other interpolant is necessarily more "wiggly" owing to the 1-periodic term just mentioned. Is there something rigorous that both singles them out over all others, and that allows us to carry out similar interpolations on a very general class of functions $f$, with such interpolations considered their analogues? To put it more simply: what uniquely distinguishes $\frac{x(x+1)}{2}$ from, say, $\frac{x(x+1)}{2} + \sin(2\pi x) - \pi^{\gamma \Gamma\left(\frac{5}{3}\right)} \cos(4\pi x)$ as an interpolant of that particular sum, and which it shares with $2^x - 1$ (up to translation) versus $2^x$?
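To see this nonuniqueness concretely, here is an illustrative Python sketch (the perturbation is chosen arbitrarily): both the "nice" interpolant and a 1-periodically perturbed one satisfy the recursion $f(x) = f(x-1) + x$ and agree at the integers, yet they differ everywhere in between:

```python
import math

def f_nice(x):
    return x * (x + 1) / 2

def f_wiggly(x):
    # add an arbitrary 1-periodic term; it vanishes at the integers
    return f_nice(x) + math.sin(2 * math.pi * x)

# Both satisfy the same functional equation f(x) = f(x-1) + x
for f in (f_nice, f_wiggly):
    for x in (0.3, 1.7, 4.25):
        assert abs(f(x) - (f(x - 1) + x)) < 1e-9

assert abs(f_nice(3) - f_wiggly(3)) < 1e-12   # agree at integers
assert abs(f_nice(0.25) - f_wiggly(0.25)) > 0.5  # differ in between
```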

There are 2 answers below.


(Started as a comment, but became too long for one.)

I don't know that there exists a universal "natural" criterion for such choices, but rather case-by-case rationales for why a certain choice is more suitable in context.

Taking $\,f(n) = \sum_{k=1}^{n} k\,$ for example, and as noted in the post, the general solution of the functional equation $\,f(x) = f(x-1) + x\,$ is $\,f(x) = \frac{x(x+1)}{2} + c(x)\,$ where $\,c(x)\,$ is any periodic function of period $\,1\,$.

  • The simplest choice would of course be $\,c(x) \equiv 0\,$.

  • The $\,3^{rd}\,$ order finite differences of $\,f(n)\,$ are zero, so it would make sense to look for a continuation with zero $\,3^{rd}\,$ derivative $\,f'''(x) \equiv 0\,$, which requires $\,c(x)\,$ to be constant.

  • $\,f(n)\,$ is a polynomial, so it would make sense to look for a polynomial $\,f(x)\,$, which also requires $\,c(x)\,$ to be constant, since the only periodic polynomials are the constant ones.

None of the above forces the choice $\,c(x) \equiv 0\,$, but together they justify it.
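The second bullet can be checked directly: the third forward difference of a quadratic vanishes identically (at any step size, not only at integer steps), while it does not vanish for a 1-periodically perturbed interpolant. An illustrative Python sketch, with the perturbation chosen arbitrarily:

```python
import math

def d3(f, x, h):
    # third forward difference of f at x with step h
    return f(x + 3*h) - 3*f(x + 2*h) + 3*f(x + h) - f(x)

nice = lambda x: x * (x + 1) / 2
wiggly = lambda x: nice(x) + math.sin(2 * math.pi * x)

print(d3(nice, 0.3, 0.1))    # 0 for a quadratic (up to rounding)
print(d3(wiggly, 0.3, 0.1))  # nonzero: the periodic term survives
```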

There are cases, however, where the choice is not as clear-cut. Take for example $\,f(n) = n!\,$. There are (at least) two continuations to the real (or complex) domain: the well-known Gamma function (the only log-convex solution of the factorial recursion, by the Bohr–Mollerup theorem), and the lesser-known Hadamard gamma function (which, unlike $\,\Gamma\,$, is an entire function). Peter Luschny's page on Hadamard versus Euler makes for good reading on this and more.


Hint: You might be interested in Summability Calculus - A Comprehensive Theory of Fractional Finite Sums by I. M. Alabdulmohsin.

In this monograph, finite sums are treated with the following goals (from section 1.3):

  • Extend the domain of finite sums to the complex plane $\mathbb{C}$.

  • Perform infinitesimal calculus on discrete finite sums: differentiation, integration, Taylor series expansion, ...

  • Determine the asymptotic behaviour of discrete finite sums.

In order to work with fractional sums, an important consideration is this: a finite sum \begin{align*} f(n)=\sum_{k=0}^{n-1}g(k) \end{align*} essentially represents a shorthand notation for an iterated addition, naturally restricted to discrete values. The author then uses ideas from How to Add a Noninteger Number of Terms: From Axioms to New Identities by M. Müller and D. Schleicher and introduces two additional properties that naturally extend the domain of $f(n)$ to the complex plane $\mathbb{C}$: \begin{align*} &\color{blue}{\forall x,g(\cdot):\sum_{k=x}^xg(k)=g(x)}\\ &\color{blue}{\forall a,b,c,g(\cdot):\sum_{k=a}^bg(k)+\sum_{k=b+1}^cg(k)=\sum_{k=a}^cg(k)}. \end{align*}
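For intuition only, here is an illustrative Python sketch of these two properties in the concrete case $g(k)=k$, taking the fractional sum to be $\sum_{k=a}^b k := f(b) - f(a-1)$ with $f(x) = \frac{x(x+1)}{2}$ (a hypothetical but natural choice, consistent with the closed form above):

```python
def f(x):
    return x * (x + 1) / 2

def frac_sum(a, b):
    # fractional sum of g(k) = k from k = a to k = b
    return f(b) - f(a - 1)

# Property 1: a one-term sum equals g(x), even for fractional x
for x in (0.5, 2.0, 3.75):
    assert abs(frac_sum(x, x) - x) < 1e-12

# Property 2: sums over adjacent ranges concatenate
a, b, c = 0.25, 1.75, 3.5
assert abs(frac_sum(a, b) + frac_sum(b + 1, c) - frac_sum(a, c)) < 1e-12
```

Both properties hold identically here because the fractional sum telescopes in $f$; this is of course only a special case of the general construction in the monograph.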

Euler–Maclaurin summation formula: In Corollary 2.3, the author develops the Euler–Maclaurin summation formula in the following form:

  • Let $f(n)$ be a simple finite sum given by $f(n)=\sum_{k=0}^{n-1}g(k)$, where $g(k)$ is regular at the origin, and let $f_G(n)$ be given formally by the following series expansion around $n=0$, where $B_k$ are the Bernoulli numbers: \begin{align*} f_G(n)=\sum_{k=0}^\infty \frac{c_k}{k!}n^k,\qquad c_k=\sum_{r=0}^\infty\frac{B_r}{r!}g^{(r+k-1)}(n)-\sum_{j=0}^{n-1}g^{(k)}(j) \end{align*} then $f_G(n)$ is given formally by \begin{align*} \color{blue}{f_G(n)=\int_{0}^ng(t)\,dt+\sum_{r=1}^{\infty}\frac{B_r}{r!}\left(g^{(r-1)}(n)-g^{(r-1)}(0)\right)}\tag{1} \end{align*} The Euler–Maclaurin formula is heavily used throughout the book. Using (1), the author provides for instance in (2.5.5): \begin{align*} f_G(n)=\sum_{k=0}^{n-1}\sqrt{1+k}=-\frac{\zeta(1/2)}{2}n+\sum_{r=2}^\infty(-1)^r\frac{(2r-3)!}{4^{r-1}r!(r-2)!}\zeta(r-1/2)n^r \end{align*}
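As an illustrative check of formula (1) (my own Python sketch, not from the monograph): for a polynomial $g$ the Bernoulli series terminates, so it can be verified in exact rational arithmetic. Here $g(t)=t^2$, using the convention $B_1 = -\frac{1}{2}$:

```python
import math
from fractions import Fraction as F

# Bernoulli numbers with the convention B_1 = -1/2, as used in formula (1)
B = {1: F(-1, 2), 2: F(1, 6), 3: F(0)}

def d(r, t):
    # r-th derivative of g(t) = t^2, evaluated at t
    return [F(t) ** 2, 2 * F(t), F(2), F(0)][r]

def f_G(n):
    integral = F(n) ** 3 / 3   # ∫_0^n t^2 dt
    return integral + sum(B[r] / math.factorial(r) * (d(r - 1, n) - d(r - 1, 0))
                          for r in (1, 2, 3))

# Matches the discrete sum Σ_{k=0}^{n-1} k^2 at every positive integer
for n in range(1, 8):
    assert f_G(n) == sum(k ** 2 for k in range(n))
print(f_G(5))  # 30 = 0 + 1 + 4 + 9 + 16
```

The result is the familiar $\frac{n^3}{3}-\frac{n^2}{2}+\frac{n}{6}=\frac{n(n-1)(2n-1)}{6}$, and of course $f_G$ now accepts fractional $n$ as well.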

Hyperfactorial and superfactorial functions:

It might be of interest that the hyperfactorial function $H(n)=\prod_{k=1}^nk^k$ and the superfactorial function $S(n)=\prod_{k=1}^nk!$ can also be treated with this kind of fractional summability calculus. For instance, the author states in (2.5.19) the log-hyperfactorial sum $f(n)=\sum_{k=0}^{n-1}(1+k)\log(1+k)$ as \begin{align*} \sum_{k=0}^{n-1}(1+k)\log(1+k)=\left(\frac{1-\log (2\pi)}{2}\right)n+\frac{1-\gamma}{2}n^2+\sum_{k=2}^\infty(-1)^k\frac{\zeta(k)}{k(k+1)}n^{k+1} \end{align*}

Some gems:

  • In section 6.2 the following representation of the hyperfactorial function $H(n)=\prod_{k=1}^nk^k$ is stated (6.2.16): \begin{align*} \prod_{k=1}^nk^k=4e^{n(n-1)/2-1}\prod_{k=2}^\infty\frac{(k-1)^n(1-\frac{1}{k})^{-k(n-2)-n(n-1)/2}(k+1)^{k+1}} {k(k-1)(k+n-1)^{k+n-1}} \end{align*}

  • Exercise 6.7 asks to prove (6.2.19): \begin{align*} \sum_{k=0}^{n-1}\log k!=-\frac{\gamma}{2}n(n-1)+\log\prod_{k=0}^\infty\frac{k!\cdot(k+1)^n}{(k+n)!}\,e^{\frac{n(n-1)}{2(k+1)}} \end{align*} with special case (6.2.20): \begin{align*} \sum_{k=0}^{-\frac{3}{2}}\log k!=\frac{\log 2}{24}-\frac{\log \pi}{4}+\frac{1}{8}-\frac{3\log A}{2} \end{align*} Here $A= 1.282\,427\,\ldots$ is Glaisher's constant.

Note: This monograph is not elementary, since it uses, for instance, specific summation methods (Cesàro means, Borel summability, etc.) and the theory of summability of divergent series in general. From section 1.6, Historical Remarks:

  • In the twentieth century, the most renowned summability theorist was Hardy, whose classic book Divergent Series remains authoritative.