The sum of fractional powers $\sum\limits_{k=1}^x k^t$.


This post is a continuation of Generalization of the Bernoulli polynomials ( in relation to the Index ). The definition of the Bernoulli polynomial $B_t(x)$ for $|x|<1$ extends through $B_t(x+1)=B_t(x)+t x^{t-1}$.

Two equivalent definitions for $B_t(x)$ with $|x|<1$:

$$B_t(x):=-t\zeta(1-t,x)$$ or

\begin{align*} B_t(x+1):=&-\frac{2\Gamma(1+t)}{(2\pi)^t}\cos \left( \frac{\pi t}{2} \right) \sum_{k=0}^\infty (-1)^k \frac{(2\pi x)^{2k}}{(2k)!}\zeta(t-2k) \\ &-\frac{2\Gamma(1+t)}{(2\pi)^t}\sin \left( \frac{\pi t}{2} \right) \sum_{k=0}^\infty (-1)^k \frac{(2\pi x)^{2k+1}}{(2k+1)!}\zeta(t-1-2k) \end{align*}

with $-t\in\mathbb{R}\setminus\mathbb{N}$.

Following https://www.researchgate.net/publication/238803313_Bernoulli_numbers_and_polynomials_of_arbitrary_complex_indices , page 86, Theorem 5, and using equation (11) with the lower limit $1$ instead of $0$ ($k=1$ instead of $k=0$), the formula for the sum of fractional powers is $$S_x(t):=\sum\limits_{k=1}^x k^t =\frac{B_{t+1}(x+1)-B_{t+1}(1)}{t+1}$$ with $x\in\mathbb{N}_0$ and $t\in\mathbb{R}_0^+$ (in general $t$ can be complex, but I don't need that possibility here).
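For integer $t$ the formula is the classical Faulhaber identity and can be checked exactly. A minimal sketch (all names ad hoc), generating the Bernoulli numbers from the standard recurrence with the convention $B_1=-\tfrac12$:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """Bernoulli numbers B_0..B_n via sum_{j<=m} C(m+1,j) B_j = 0 (so B_1 = -1/2)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-Fraction(1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m)))
    return B

def bernoulli_poly(n, x):
    """B_n(x) = sum_k C(n,k) B_k x^(n-k), exact over the rationals."""
    B = bernoulli_numbers(n)
    return sum(comb(n, k) * B[k] * Fraction(x) ** (n - k) for k in range(n + 1))

t, x = 3, 10
lhs = sum(k ** t for k in range(1, x + 1))                                 # 1^3 + ... + 10^3
rhs = (bernoulli_poly(t + 1, x + 1) - bernoulli_poly(t + 1, 1)) / (t + 1)
print(lhs, rhs)  # both 3025
```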

The right-hand side may be differentiated with respect to $x$, and therefore one can write $$\frac{\partial}{\partial x} S_x(t)=B_t(x+1)$$ Differentiating with respect to $t$ instead, and using the definition $M_x(t):=\prod\limits_{k=1}^x k^{k^t}$, one gets $$\ln M_x(t)=\frac{\partial}{\partial t}S_x(t)=\frac{\partial}{\partial t}\frac{B_{t+1}(x+1)-B_{t+1}(1)}{t+1}$$
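For integer $x$ the sum $S_x(t)$ is finite, so the $t$-derivative can be compared directly against $\ln M_x(t)=\sum_{k=1}^x k^t\ln k$. A central-difference sketch (names are ad hoc):

```python
import math

def S(x, t):
    """S_x(t) = sum_{k=1}^x k^t (finite sum, integer x)."""
    return sum(k ** t for k in range(1, x + 1))

def ln_M(x, t):
    """ln M_x(t) = sum_{k=1}^x k^t * ln k, the t-derivative of S_x(t)."""
    return sum(k ** t * math.log(k) for k in range(1, x + 1))

x, t, h = 20, 1.5, 1e-6
numeric = (S(x, t + h) - S(x, t - h)) / (2 * h)  # central difference in t
print(numeric, ln_M(x, t))
```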

Together one gets (by exchanging the derivatives, which is possible here) $$\frac{\partial}{\partial t}B_t(x+1)=\frac{\partial}{\partial x}\ln M_x(t)$$

Note:

Perhaps this equation becomes a bit clearer if one looks at $$\frac{\partial}{\partial t}\Delta B_t(x)=\frac{\partial}{\partial x}\Delta \ln M_{x-1}(t)$$ with $\Delta B_t(x):=B_t(x+1)-B_t(x)=tx^{t-1}$ and $\Delta \ln M_x(t):=\ln M_{x+1}(t)-\ln M_x(t)=(x+1)^t\ln(x+1)$.
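For the differences themselves the identity can be verified directly: using the two difference formulas just stated, both sides reduce to the same expression,

$$\frac{\partial}{\partial t}\Delta B_t(x)=\frac{\partial}{\partial t}\,t x^{t-1}=x^{t-1}+t\,x^{t-1}\ln x=x^{t-1}\left(1+t\ln x\right)$$

$$\frac{\partial}{\partial x}\Delta \ln M_{x-1}(t)=\frac{\partial}{\partial x}\,x^{t}\ln x=t\,x^{t-1}\ln x+x^{t-1}=x^{t-1}\left(1+t\ln x\right)$$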

The problem now is:

I need a formula for $\ln M_x(t)$ or $M_x(t)$ that is independent of $B_t(x)$ (otherwise it's a trivial identity), where $x$ and $t$ are variable. It could be a series of (more or less known) functions of $x$ (or perhaps of $x$ and $t$) which reduces to a finite sum/term for $t\in\mathbb{N}$ - similar to $B_t(x)$.

Alternative: Prove that the two definitions above for $B_t(x)$ are indeed equivalent (a link to the literature is enough).

Note:

The Euler–Maclaurin formula can perhaps give a formula for $\ln M_x(t)$. Does someone know a link where this is computed?

Addition:

Maybe http://ac.els-cdn.com/S0377042798001927/1-s2.0-S0377042798001927-main.pdf?_tid=36ead884-7132-11e6-ac53-00000aab0f6b&acdnat=1472837296_60501a990f4d37792d48c76ad38c7e4b , page 198, equation (21), can help. (I will see.)


An application example with $\ln M_x(1)$:

The Fourier series of $B_t(x)$ is $$ \Re \left( \sum\limits_{k=1}^{\infty}{\frac{e^{i2\pi kx}}{\left( ik \right) ^t}} \right) =-\frac{\left( 2\pi \right) ^t}{2\Gamma \left( 1+t \right)}B_t\left( x \right) $$ for $0<x<1$ and $t>0$, with the principal branch $(ik)^t=k^t e^{i\pi t/2}$ (check at $t=1$: $\sum_{k\ge1}\frac{\sin (2\pi kx)}{k}=\pi(\tfrac12-x)=-\pi B_1(x)$).
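At $t=2$ the real part reduces (up to the sign convention for $(ik)^t$) to the classical identity $\sum_{k\ge1}\cos(2\pi kx)/k^2=\pi^2 B_2(x)$ with $B_2(x)=x^2-x+\tfrac16$, which gives a quick numerical sanity check (truncation level is ad hoc):

```python
import math

def fourier_side(x, terms=100_000):
    # truncated sum_{k>=1} cos(2*pi*k*x) / k^2 ; the tail is O(1/terms)
    return sum(math.cos(2 * math.pi * k * x) / k ** 2 for k in range(1, terms + 1))

def bernoulli_side(x):
    # pi^2 * B_2(x) with B_2(x) = x^2 - x + 1/6
    return math.pi ** 2 * (x * x - x + 1.0 / 6.0)

x = 0.3
print(fourier_side(x), bernoulli_side(x))
```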

It is known that $\frac{d}{dx}\ln M_x(1)=-\ln\sqrt{2\pi}+\frac{1}{2}+x+\ln\Gamma(1+x)$.

Using
$$\frac{\partial}{\partial t}B_t(x)\Big|_{t=1}=\frac{d}{dx}\ln M_{x-1}(1)$$ and differentiating the Fourier series of $B_t(x)$ (above) with respect to $t$, taking into account that $(\ln\Gamma(1+t))'|_{t=1}=1-\gamma$, one gets

$$\sum_{k=1}^{\infty}{\frac{\ln k}{k}}\sin \left( 2\pi kx \right) =\frac{\pi}{2}\left( \ln \frac{\Gamma \left( x \right)}{\Gamma \left( 1-x \right)}-\left( 1-2x \right) \left( \gamma +\ln \left( 2\pi \right) \right) \right) $$

which can be seen in http://reader.digitale-sammlungen.de/en/fs1/object/display/bsb10525489_00011.html?zoom=1.0 (on the top of page 4) and in http://arxiv.org/pdf/1309.3824.pdf (page 30, formula 65.)
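The resulting series identity can also be checked numerically; the series is only conditionally convergent and slow (error roughly $\ln N/N$), and the numeric value of Euler's constant is assumed:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def lhs(x, terms=200_000):
    # truncated sum_{k>=2} (ln k / k) * sin(2*pi*k*x)   (the k = 1 term vanishes)
    return sum(math.log(k) / k * math.sin(2 * math.pi * k * x)
               for k in range(2, terms + 1))

def rhs(x):
    # (pi/2) * ( ln(Gamma(x)/Gamma(1-x)) - (1 - 2x)(gamma + ln(2*pi)) ),  0 < x < 1
    return (math.pi / 2) * (math.lgamma(x) - math.lgamma(1 - x)
                            - (1 - 2 * x) * (EULER_GAMMA + math.log(2 * math.pi)))

x = 0.25
print(lhs(x), rhs(x))
```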


A second application example where I use $\frac{d}{dx}\ln M_x(m+1)|_{x=0}$ with $m\in\mathbb{N}_0$:

Adamchik computed $$\zeta’(-m)=\frac{B_{m+1}H_m}{m+1}-\ln A_m$$ where $B_n$ are the Bernoulli numbers, $H_n$ are the harmonic numbers and $A_n$ are the generalized Glaisher–Kinkelin constants. See e.g. http://www.sciencedirect.com/science/article/pii/S0377042798001927 (Article; last page, equation (24)).
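For $m=1$ this reads $\zeta'(-1)=\tfrac1{12}-\ln A$ with the classical Glaisher–Kinkelin constant $A=A_1$. A numerical sketch using the limit definition of $\ln A$ (the reference value of $\zeta'(-1)$ is the well-known one; names are ad hoc):

```python
import math

def ln_glaisher(n):
    # ln A = lim_n ( sum_{k<=n} k*ln k - (n^2/2 + n/2 + 1/12)*ln n + n^2/4 )
    s = math.fsum(k * math.log(k) for k in range(2, n + 1))
    return s - (n * n / 2 + n / 2 + 1.0 / 12.0) * math.log(n) + n * n / 4

ln_A = ln_glaisher(10_000)
zeta_prime = 1.0 / 12.0 - ln_A  # Adamchik's formula at m = 1
print(ln_A, zeta_prime)  # ln A ≈ 0.24875448, zeta'(-1) ≈ -0.16542114
```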

Solving equation (5.4) on page 36 of
https://www.fernuni-hagen.de/analysis/docs/bachelorarbeit_aschauer.pdf
for $\ln M_x(k)$, using $\frac{B_{k+1}(x+1+w_2)- B_{k+1}(1+w_2)}{k+1}$ instead of $\sum\limits_{j=1}^x (w_2+j)^k$, and setting $(w_1;w_2):=(1;0)$ results in

\begin{align*} \ln M_x(m)&=H_m\frac{B_{m+1}(x+1)- B_{m+1}(1)}{m+1}+\ln Q_m(x)+ \\ &+\sum_{k=0}^{m-1}\binom{m}{k}(-x)^{m-k}\sum_{v=0}^k \binom{k}{v}x^{k-v}(\ln A_v -\ln Q_v(x)) \end{align*}

The definition of $Q_m(x)$ is (4.2) on page 13; it is something like a modified multiple Gamma function. $\frac{d}{dx}\ln M_x(m)$ can be computed by applying the differentiation rule (4.4) to the equation above.

Now, with $B_t(1)=-t\zeta(1-t)$ and $\frac{d}{dt}B_t(1)|_{t=m}=\frac{d}{dx}\ln M_x(m)|_{x=0}$, one gets the equation chain $$\frac{B_{m+1}(1)}{m+1}+(m+1)\zeta’(-m)= -\zeta(-m)+(m+1)\zeta’(-m)=(-t\zeta(1-t))’|_{t=m+1}$$ $$=\frac{d}{dt}B_t(1)|_{t=m+1}=\frac{d}{dx}\ln M_x(m+1)|_{x=0}=H_{m+1}B_{m+1}(1)-(m+1)\ln A_m$$ Solving this for $\zeta’(-m)$ and taking into account that $H_{m+1}-\frac{1}{m+1}=H_m$ and $H_m B_{m+1}(1)=H_m B_{m+1}$ for $m\in\mathbb{N}_0$, one gets Adamchik’s result.


Simplest solution for proving $\displaystyle \frac{\partial}{\partial t}B_t(x+1)=\frac{\partial}{\partial x}\ln M_x(t)$

by using the 2nd development of G Cab with the Hurwitz Zeta function:

$\zeta(a,b):= \sum\limits_{k=0}^\infty (b+k)^{-a}$

$\displaystyle \frac{B_{t+1}(x+1)-B_{t+1}(1)}{t+1}=S_x(t)=\zeta(-t,1)-\zeta(-t,x+1)$ and therefore
$\displaystyle \frac{\partial}{\partial t}S_x(t)=\ln M_x(t)=\sum\limits_{k=0}^\infty (k+1)^t\ln(k+1) - \sum\limits_{k=0}^\infty (k+x+1)^t\ln (k+x+1)$

$\displaystyle \frac{\partial}{\partial x}S_x(t)= B_t(x+1)=-t\zeta(1-t,x+1)\,$ (as mentioned by gammatester, first link above)

\begin{align*} \frac{\partial}{\partial t}B_t(x+1)&= \frac{\partial}{\partial t}\frac{\partial}{\partial x}(\zeta(-t,1)-\zeta(-t,x+1)) \\ &=\frac{\partial}{\partial x}\frac{\partial}{\partial t}(\zeta(-t,1)-\zeta(-t,x+1))=\frac{\partial}{\partial x}\ln M_x(t) \end{align*}
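The interchange can also be sanity-checked numerically in the regime $t<-1$, where all the series above converge absolutely (the post's case $t\in\mathbb{R}_0^+$ is their analytic continuation). A sketch with ad hoc names and truncation:

```python
import math

N = 20_000  # truncation level; the tails are negligible for t = -3

def B(t, x):
    # B_t(x+1) = -t * zeta(1-t, x+1) = -t * sum_{k>=0} (k+x+1)^(t-1)
    return -t * sum((k + x + 1) ** (t - 1) for k in range(N))

def ln_M_x_part(t, x):
    # x-dependent part of ln M_x(t); the x-free series C(t) drops out under d/dx
    return -sum((k + x + 1) ** t * math.log(k + x + 1) for k in range(N))

t, x, h = -3.0, 0.5, 1e-5
dB_dt = (B(t + h, x) - B(t - h, x)) / (2 * h)                        # d/dt B_t(x+1)
dlnM_dx = (ln_M_x_part(t, x + h) - ln_M_x_part(t, x - h)) / (2 * h)  # d/dx ln M_x(t)
print(dB_dt, dlnM_dx)
```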

Note:

Substituting $B_t(x)$ and $\ln M_x(t)$ with other formulas leads to non-trivial equations (as shown in the application examples above).


BEST ANSWER

This is just a "what if?" consideration, not an answer, but I guess it might be of some help for your purpose. So, flanking the analysis you are conducting, you may consider this alternative development for $S_x(t)$.

  • 1st development $$ \begin{align*} S_x (t) &= \sum\limits_{k = 1}^x k^{t} = \sum\nolimits_{k = 1}^{x + 1} k^{t} = \frac{B_{t + 1}(x + 1) - B_{t + 1}(1)}{t + 1} \tag{1}\\ &= \sum\nolimits_{k = 0}^{x} \left( k + 1 \right)^{t} = \sum\nolimits_{k = 0}^{x} \sum\limits_{0 \leqslant j} \binom{t}{j} k^{j} = \sum\limits_{0 \leqslant j} \binom{t}{j} \sum\nolimits_{k = 0}^{x} k^{j}\\ &= \sum\limits_{0 \leqslant j} \binom{t}{j} \frac{B_{j + 1}(x) - B_{j + 1}(0)}{j + 1} \tag{2}\\ &= \sum\nolimits_{k = 0}^{x} \sum\limits_{0 \leqslant l \leqslant j} \binom{t}{j} \left\{ {j \atop l} \right\} k^{\underline{l}}\\ &= \sum\limits_{0 \leqslant l \leqslant j} \binom{t}{j} \left\{ {j \atop l} \right\} \frac{x^{\underline{l + 1}}}{l + 1} = \sum\limits_{0 \leqslant l \leqslant j} \frac{t^{\underline{j}}}{j!} \left\{ {j \atop l} \right\} \frac{x^{\underline{l + 1}}}{l + 1} \tag{3} \end{align*} $$ where the symbol $\sum\nolimits_{k = 1}^{x + 1}$ indicates the indefinite sum, computed between the indicated bounds, and the curly brackets denote the Stirling numbers of the second kind.
    For the purpose of differentiating with respect to $t$ and $x$, you may replace the falling factorials $t^{\underline{j}}$ and $x^{\underline{l+1}}$ with the corresponding Stirling development in $t^n$ and $x^m$, or with their expressions through the Gamma function.
  • 2nd development
    You can also write $S_x(t)$ in terms of the Hurwitz zeta function $$ \begin{align*} S_x (t) &= \sum\limits_{k = 1}^x k^{t} = \sum\nolimits_{k = 1}^{x + 1} k^{t}\\ &= \sum\nolimits_{k = 1}^{\infty} k^{t} - \sum\nolimits_{k = x + 1}^{\infty} k^{t} = \sum\nolimits_{k = 0}^{\infty} \left( k + 1 \right)^{t} - \sum\nolimits_{j = 0}^{\infty} \left( j + x + 1 \right)^{t}\\ &= \zeta(-t, 1) - \zeta(-t, x + 1) \tag{4} \end{align*} $$
  • Note concerning the handling of sums and products with non-integer bounds
    First let's note that $$ S_x (t) = \sum\limits_{k = 1}^x k^{t} \quad \Rightarrow \quad \left( x + 1 \right)^{t} = S_{x + 1}(t) - S_x (t) = \left( S_{x + 1}(t) + c(x + 1) \right) - \left( S_x (t) + c(x) \right) $$ and $$ M_x (t) = \prod\limits_{k = 1}^x k^{k^{t}} = \prod\nolimits_{k = 1}^{x + 1} k^{k^{t}} = \prod\nolimits_{k = 0}^{x} \left( k + 1 \right)^{\left( k + 1 \right)^{t}} \quad \Rightarrow \quad \left( x + 1 \right)^{\left( x + 1 \right)^{t}} = \frac{M_{x + 1}(t)}{M_x (t)} = \frac{c(x + 1)\,M_{x + 1}(t)}{c(x)\,M_x (t)} $$ with $c(x)$ any periodic function with period $1$. Then, taking for example the starting base of your development, we get the following two different "definitions" for $B_t(x+1)$: from $$ S_x (t) = \frac{B_{t + 1}(x + 1) - B_{t + 1}(1)}{t + 1} $$ it follows that $$ \frac{\partial}{\partial x} S_x (t) = \frac{1}{t + 1} \frac{\partial}{\partial x} B_{t + 1}(x + 1) = B_{t}(x + 1) = -t \sum\nolimits_{k = 0}^{\infty} \left( k + x + 1 \right)^{t - 1}, $$ while from $$ S_x (t) = \sum\limits_{k = 1}^x k^{t} = \sum\nolimits_{k = 1}^{x + 1} k^{t} = \sum\nolimits_{k = 0}^{\infty} \left( k + 1 \right)^{t} - \sum\nolimits_{k = 0}^{\infty} \left( k + x + 1 \right)^{t} $$ it follows that $$ \frac{B_{t + 1}(x + 1)}{t + 1} = f(t + 1) - \sum\nolimits_{k = 0}^{\infty} \left( k + x + 1 \right)^{t} \quad \Rightarrow \quad B_{t}(x + 1) = t\,f(t) - t \sum\nolimits_{k = 0}^{\infty} \left( k + x + 1 \right)^{t - 1} $$ where
  • the derivative in $x$ is taken, in the first case, by extending the known property for integer index to real index, and, in the second, by differentiating the expression of $S_x(t)$ as a difference of two sums;
  • $f(t)$ can be any function of $t$; in particular it could be $B_t(1)$, which in turn can be taken as $-t\,\zeta (1 - t)$, as it is in many papers concerning the extension of Bernoulli polynomials.

Thus it is evident that such mathematical entities shall be handled with great care, especially when taking derivatives.
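Development (3) above can be checked exactly for integer $t$, where the binomial series terminates at $j=t$. A sketch in exact rational arithmetic (all names ad hoc):

```python
from fractions import Fraction
from math import factorial

def stirling2(j, l):
    """Stirling numbers of the second kind S2(j, l)."""
    if j == l:
        return 1
    if j == 0 or l == 0:
        return 0
    return l * stirling2(j - 1, l) + stirling2(j - 1, l - 1)

def falling(a, n):
    """Falling factorial a^(n underlined) = a*(a-1)*...*(a-n+1)."""
    out = Fraction(1)
    for i in range(n):
        out *= a - i
    return out

def S_via_stirling(x, t):
    # sum over 0 <= l <= j of  t^(j_)/j! * S2(j,l) * x^((l+1)_)/(l+1)
    total = Fraction(0)
    for j in range(t + 1):  # falling(t, j) = 0 for j > t when t is an integer
        for l in range(j + 1):
            total += falling(t, j) / factorial(j) * stirling2(j, l) \
                     * falling(x, l + 1) / (l + 1)
    return total

print(S_via_stirling(5, 3), sum(k ** 3 for k in range(1, 6)))  # both 225
```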


Applying the Euler-Maclaurin Sum Formula

The Euler-Maclaurin Sum Formula can be applied to $k^t$ to get the approximation $$ \sum_{k=1}^nk^t=\zeta(-t)+\frac1{t+1}n^{t+1}+\frac12n^t+\frac{t}{12}n^{t-1}-\frac{t^3-3t^2+2t}{720}n^{t-3}+O\!\left(n^{t-5}\right) $$ When $t\lt-1$, this describes how the series for $\zeta(-t)$ converges.
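At $t=-2$ the constant is $\zeta(2)=\pi^2/6$, known in closed form, so the displayed terms can be checked directly (the choice of $n$ is ad hoc):

```python
import math

n, t = 100, -2.0
direct = sum(k ** t for k in range(1, n + 1))
# zeta(-t) + n^(t+1)/(t+1) + n^t/2 + (t/12)*n^(t-1) - (t^3-3t^2+2t)/720 * n^(t-3)
approx = (math.pi ** 2 / 6 + n ** (t + 1) / (t + 1) + n ** t / 2
          + t / 12 * n ** (t - 1)
          - (t ** 3 - 3 * t ** 2 + 2 * t) / 720 * n ** (t - 3))
print(direct, approx)  # agree to within ~1e-13 (the O(n^(t-5)) error term)
```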


Possible Extension to Non-Integral Summation Limits

Consider $$ \begin{align} \lim_{\delta\to0}\frac1\delta\left(\sum_{k=1}^{n+\delta}k^t\color{#C00000}{-\sum_{k=1}^{m+\delta}k^t}\right) &=\lim_{\delta\to0}\frac1\delta\sum_{k=m+1+\delta}^{n+\delta}k^t\\ &=\lim_{\delta\to0}\frac1\delta\sum_{k=m+1}^n(k+\delta)^t\\ &=t\sum_{k=m+1}^nk^{t-1} \end{align} $$ Thus, if we give a meaning to taking a derivative with respect to the upper limit of summation, it would give $$ \frac{\mathrm{d}}{\mathrm{d}n}\sum_{k=1}^nk^t=t\sum_{k=1}^nk^{t-1}\color{#C00000}{+C} $$ where $C$ is related to the behavior near $m=0$.
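The limit can be illustrated numerically with a small but finite $\delta$ (ad hoc parameters):

```python
# (1/d) * sum_{k=m+1}^n ((k+d)^t - k^t)  ->  t * sum_{k=m+1}^n k^(t-1)  as d -> 0:
# shifting both summation limits by d shifts every summand by d.
t, m, n, d = 0.7, 2, 10, 1e-7
shifted = sum((k + d) ** t - k ** t for k in range(m + 1, n + 1)) / d
exact = t * sum(k ** (t - 1) for k in range(m + 1, n + 1))
print(shifted, exact)
```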