Does $\sum_{n=1}^\infty \frac{1}{nH_n^2}$ (and related sums) have any closed-form representations?


We know that $$\sum_n \frac{1}{n^{1+\epsilon}} \ \text{converges for all } \epsilon>0, \qquad \text{while} \qquad \sum_{k=1}^n \frac{1}{k} \sim \ln(n). \tag{1}\label{asymp1}$$ Using this, we define $$\gamma_1 := \lim_{n\to\infty} \left\{ \sum_{k=1}^n \frac{1}{k} \ - \ \ln(n) \right\}, \tag{2}\label{gamma1}$$ which is the Euler–Mascheroni constant.
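As a quick numerical illustration of $\eqref{gamma1}$ (a minimal Python sketch with ad hoc names, not part of the original argument), the expression inside the limit already gives several digits of $\gamma$, with an error of roughly $1/(2n)$:

```python
import math

def gamma_estimate(n):
    """Approximate the Euler-Mascheroni constant as H_n - ln(n)."""
    h = sum(1.0 / k for k in range(1, n + 1))  # harmonic number H_n
    return h - math.log(n)

# Error is about 1/(2n), so n = 10^6 gives ~6 correct digits.
print(gamma_estimate(10**6))  # about 0.577216 (gamma = 0.5772156649...)
```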

Repeating the construction, this time on the $\ln(n)$ in $\eqref{asymp1}$ (and using the fact that the harmonic number $H_n$ is the discrete analogue of the logarithm), we have $$\sum_n \frac{1}{n\ H_n^{1+\epsilon}} \ \text{converges for all } \epsilon>0 \tag{3}\label{conv1}$$ and $$\sum_{k=1}^n \frac{1}{k\ H_k} \quad \sim \quad \ln(\ln(n)). \tag{4}\label{asymp2}$$

Due to \eqref{asymp2}, we can define $$ \gamma_2 := \lim_{n\to\infty}\left\{\sum_{k=1}^n \frac{1}{k\ H_k} \ - \ \ln(\ln(n)) \right\}. \tag{5}\label{gamma2} $$
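To see how slowly $\eqref{gamma2}$ converges, here is a small Python sketch (mine, with ad hoc names) evaluating the expression inside the limit for moderate $n$:

```python
import math

def gamma2_partial(n):
    """Evaluate sum_{k<=n} 1/(k*H_k) - log(log(n)), as in definition (5)."""
    h = 0.0  # running harmonic number H_k
    s = 0.0  # running sum of 1/(k*H_k)
    for k in range(1, n + 1):
        h += 1.0 / k
        s += 1.0 / (k * h)
    return s - math.log(math.log(n))

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, gamma2_partial(n))
```

Even at $n=10^6$ the value is still near $0.93$: the error appears to decay only like $1/\ln n$, so estimates at any feasible $n$ remain far from the limit.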

It converges so slowly that I could only estimate it to a few decimal places, around $0.927...$, with very low confidence that this is close to the true value.

Out of curiosity, in view of \eqref{conv1} one can also ask whether the sum $$ \gamma_2' := \sum_{n=1}^{\infty} \frac{1}{n\ H_n^2} \tag{6}\label{gamma2'} $$ (which also converges very slowly) has some "nice" value. Wolfram evaluates it to be around $1.84825..$ (again with low confidence).
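The slow convergence of $\eqref{gamma2'}$ is easy to exhibit (a Python sketch of mine, names ad hoc): heuristically the tail beyond $n$ behaves like $1/H_n \approx 1/\ln n$, so even $n=10^5$ leaves an error near $0.08$.

```python
def eta2_partial(n):
    """Partial sum of sum_{k=1}^n 1/(k*H_k^2), i.e. of the series (6)."""
    h = 0.0  # running harmonic number H_k
    s = 0.0
    for k in range(1, n + 1):
        h += 1.0 / k
        s += 1.0 / (k * h * h)
    return s

for n in (10**3, 10**4, 10**5):
    print(n, eta2_partial(n))  # still well below 1.848 even at n = 10^5
```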

My Questions are:

  1. Do the sums $\eqref{gamma2}$ and $\eqref{gamma2'}$ have any closed-form representations?
  2. Since there are many fast-converging methods for computing $\gamma$ obtained by manipulating $\eqref{gamma1}$, can we derive similarly fast-converging methods for $\eqref{gamma2}$ and $\eqref{gamma2'}$?
  3. If neither question can be answered, what values do you get for these series, and how?
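For context on question 2 (my illustration, not from the post): one classical acceleration for $\gamma$ comes from the Euler–Maclaurin expansion $H_n - \ln n - \gamma \sim \frac{1}{2n} - \frac{1}{12n^2} + \cdots$, so subtracting the first correction terms yields many digits from a small $n$:

```python
import math

def gamma_fast(n):
    """Euler-Maclaurin accelerated estimate of the Euler-Mascheroni constant:
    gamma ~ H_n - ln(n) - 1/(2n) + 1/(12 n^2), with error O(1/n^4)."""
    h = sum(1.0 / k for k in range(1, n + 1))
    return h - math.log(n) - 1 / (2 * n) + 1 / (12 * n * n)

print(gamma_fast(100))  # agrees with 0.5772156649... to about 9 digits
```

The question is whether comparable corrections exist for \eqref{gamma2} and \eqref{gamma2'}.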

Best answer:

Here are my computations. Since the notation $\gamma_n$ is used for Stieltjes constants, let's use $$\eta_2:=\sum_{n=1}^\infty\frac1{nH_n^2},\qquad\eta_1:=\lim_{N\to\infty}\left(\sum_{n=1}^N\frac1{nH_n}-\log\log N\right)$$ instead. The formula given by Dr. Wolfgang Hintze is based on the idea that, as $n\to\infty$, the summand $1/(nH_n^2)$ is close to $1/H_{n-1}-1/H_n=1/(nH_nH_{n-1})$. Similarly, $1/(nH_n)$ is close to $\ell(n+1/2)-\ell(n-1/2)$, where $\ell(x)=\log(\gamma+\log x)$; the difference is $O\big((n\log n)^{-2}\big)$ as $n\to\infty$. Thus, just replacing $\log\log N$ by $\log\big(\gamma+\log(N+1/2)\big)$ in the definition of $\eta_1$ already gives a considerable acceleration.
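Both approximations can be tried directly. The following is my own rough Python sketch (double precision only, ad hoc names): for $\eta_2$ it adds the telescoped tail $1/H_N$ to the partial sum, since $\sum_{n>N}1/(nH_nH_{n-1})$ telescopes to $1/H_N$; for $\eta_1$ it subtracts $\log\big(\gamma+\log(N+1/2)\big)$ instead of $\log\log N$.

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def eta_estimates(n):
    """Accelerated estimates of eta2 and eta1.

    eta2: partial sum plus the telescoping tail 1/H_n, since
          1/(k*H_k^2) ~ 1/H_{k-1} - 1/H_k for large k.
    eta1: partial sum minus l(n + 1/2), where l(x) = log(GAMMA + log(x)),
          since 1/(k*H_k) ~ l(k + 1/2) - l(k - 1/2).
    """
    h = 0.0   # running harmonic number H_k
    s1 = 0.0  # running sum of 1/(k*H_k)
    s2 = 0.0  # running sum of 1/(k*H_k^2)
    for k in range(1, n + 1):
        h += 1.0 / k
        s1 += 1.0 / (k * h)
        s2 += 1.0 / (k * h * h)
    eta2 = s2 + 1.0 / h
    eta1 = s1 - math.log(GAMMA + math.log(n + 0.5))
    return eta2, eta1

print(eta_estimates(10**6))  # ~ (1.8482545..., 0.8871185...)
```

With $N=10^6$ both remainders are of order $1/(N\log^2 N)$ or smaller, which already matches the high-precision values reported below to about 8 digits.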

In both cases, we get a remainder of something like $o(1/N)$, good for getting a few decimals. Much more efficient approaches are Euler–Maclaurin summation and the Abel–Plana formula. Variations of these are implemented in PARI/GP (as sumnum and sumnumap), but they fail to work out of the box here, just as they do for the summand $n\mapsto1/(n\log^2n)$ and the like. In the second case, the formula is $$\sum_{n=k}^\infty f(n)=\int_a^\infty f(t)\,dt-i\int_0^\infty\frac{f(a+it)-f(a-it)}{e^{2\pi t}+1}\,dt,\qquad a=k-\frac12$$ and it seems that PARI/GP fails to compute $\int_a^\infty f(t)\,dt$ because of the logarithmic singularity at $\infty$.
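As a sanity check on the Abel–Plana formula quoted above, here is a self-contained Python sketch (mine, not part of the original answer) applying it to the simple summand $f(t)=1/t^2$ with $k=1$, $a=1/2$, where the exact value is $\pi^2/6$; the first integral is elementary, and the correction integral is handled with composite Simpson's rule, truncated where the $e^{2\pi t}$ factor makes the tail negligible:

```python
import math

def simpson(g, a, b, m):
    """Composite Simpson's rule on [a, b] with m (even) subintervals."""
    h = (b - a) / m
    s = g(a) + g(b)
    for j in range(1, m):
        s += (4 if j % 2 else 2) * g(a + j * h)
    return s * h / 3

def abel_plana_inv_square():
    """Compute sum_{n=1}^inf 1/n^2 via Abel-Plana with a = 1/2."""
    f = lambda z: 1 / (z * z)
    a = 0.5
    # First integral: int_{1/2}^inf t^-2 dt = 1/a = 2, done analytically.
    real_part = 1 / a
    # Correction: -i * int_0^inf (f(a+it) - f(a-it)) / (e^{2 pi t} + 1) dt,
    # whose integrand is real because f has real coefficients.
    def corr(t):
        d = f(complex(a, t)) - f(complex(a, -t))
        return (-1j * d / (math.exp(2 * math.pi * t) + 1)).real
    # Integrand decays like e^{-2 pi t}; truncating at t = 6 is enough.
    return real_part + simpson(corr, 0.0, 6.0, 6000)

print(abel_plana_inv_square(), math.pi**2 / 6)  # the two values agree
```

The same structure (real-axis integral plus an exponentially decaying correction integral) is what the PARI/GP code below implements for the harmonic-number summands, with $\psi$ in place of an explicit $H_n$.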

I have implemented it myself, using $f=f_1$ and $f=f_2$ in the above, respectively, where $$f_1(t)=\frac1{t\big(\gamma+\psi(1+t)\big)}-\log\frac{\gamma+\log(t+1/2)}{\gamma+\log(t-1/2)},\qquad f_2(t)=\frac1{t\big(\gamma+\psi(1+t)\big)^2},$$ and computing $\int_a^\infty f(t)\,dt$ as $\int_{\log a}^\infty e^x f(e^x)\,dx$; the asymptotic expansion of $\psi$ is used for large arguments.

\\ Abel-Plana summation of summand(n) over n >= 2 (so a = 3/2 in the formula).
\\ The real-axis integral is done in the variable t = log of the summation
\\ variable, switching to the asymptotic polynomial once its remainder
\\ O(x^order) drops below the working precision.
compute(order, summand, asympto, infspec) = {
    my (rbp = default(realbitprecision));
    my (approxpol = Pol(asympto(order)));
    my (exprbp(x) = if (x < -rbp, 0, exp(x)));
    my (realapx(t) = substvec(approxpol, [x, y], [exprbp(-t), 1 / (Euler + t)]));
    my (realfun(t) = if (t < rbp / order, exp(t) * summand(exp(t)), realapx(t)));
    my (cplxfun(t) = imag(summand(3/2 + I * t)) / (exp(2 * Pi * t) + 1));
    my (realint = intnum(t = log(3/2), infspec, realfun(t)));
    my (cplxint = intnum(t = 0, [oo, 2 * Pi], cplxfun(t)));
    return (realint + 2 * cplxint)
};
\\ Series for log(z) - psi(1+z) in powers of x = 1/z, so that
\\ 1/(Euler + psi(1+z)) = y / (1 - y*psitail) with y = 1/(Euler + log(z)).
psitail(order) = sum(n = 1, order - 1, bernfrac(n) * x^n / n) + O(x^order);
eta2(order) = 1 + compute(order, \
    z -> 1 / z / (Euler + psi(1 + z))^2, \
    order -> (y / (1 - y * psitail(order)))^2, \
    oo);
eta1(order) = 1 - log(Euler + log(3/2)) + compute(order, \
    z -> 1 / z / (Euler + psi(1 + z)) + log(Euler + log(z - 1/2)) - log(Euler + log(z + 1/2)), \
    order -> y / (1 - y * psitail(order)) - (log1p(y * log1p(x/2)) - log1p(y * log1p(-x/2))) / x, \
    [oo, 1]);

Playing with different (small) values of order, I get \begin{align*} \eta_2&=1.84825451761121890381193149770016003913834816035665695+\\ \eta_1&=0.88711857460166147421856128474012387410486774741815135+ \end{align*}