Borel summation of a divergent series as the best estimate of a function


My current research has me working with series of the form $\sum_{n=0}^\infty f_n z^n$, $z\in\mathbb{C}$, that usually have zero radius of convergence (they diverge for every $z\neq 0$). For unrelated reasons, I need to assume that each of these series is asymptotic to some function $f(z)$ that is well-defined near $z=0$ (not necessarily at $z=0$): $f(z)\sim \sum_{n=0}^\infty f_n z^n$.

As stated here (under the uniqueness section), a series can be asymptotic to many different functions, and the best possible estimate of $f(z)$ is the truncated sum $\sum_{n=0}^N f_n z^n$, where $N>0$ is chosen so that the truncation error $\mathcal{E}(z,N)\equiv f(z)-\sum_{n=0}^N f_n z^n$ is minimal. Obviously, if one does not know $f(z)$, then $\mathcal{E}(z,N)$ cannot be computed and minimized.
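To make the optimal-truncation idea concrete, here is a minimal numerical sketch. The Euler series $\sum_n (-1)^n n!\, z^n$ and its integral representation $f(z)=\int_0^\infty e^{-t}/(1+zt)\,dt$ are a standard toy example that I am assuming for illustration, not something taken from the question. When $f(z)$ is known, $\mathcal{E}(z,N)$ can be tabulated and minimized directly, and the minimum lands near $N\sim 1/z$:

```python
import math

# Assumed toy example: Euler's series sum_n (-1)^n n! z^n, which is
# asymptotic as z -> 0+ to f(z) = int_0^inf e^{-t} / (1 + z t) dt.

def f_exact(z, T=50.0, n=100_000):
    """Composite Simpson's rule for f(z) on [0, T]; the tail beyond
    T = 50 contributes less than e^{-50}."""
    h = T / n
    g = lambda t: math.exp(-t) / (1 + z * t)
    s = g(0.0) + g(T)
    s += 4 * sum(g((2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(g(2 * k * h) for k in range(1, n // 2))
    return s * h / 3

def partial_sum(z, N):
    return sum((-1) ** n * math.factorial(n) * z ** n for n in range(N + 1))

z = 0.1
fz = f_exact(z)
errors = {N: abs(fz - partial_sum(z, N)) for N in range(30)}
N_opt = min(errors, key=errors.get)
# The terms n! z^n shrink until n ~ 1/z and then blow up, so the
# optimal truncation order sits near N ~ 1/z = 10 here.
print(N_opt, errors[N_opt])
```

The point of the sketch is that the minimization requires evaluating $f(z)$; with only the coefficients $f_n$ in hand, this procedure is unavailable.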

It is customary practice in my field (and it is also mentioned in the link above) to assume that Borel summation (for which $f(z)$ need not be known) returns the best possible estimate of $f(z)$ that can be constructed from the series $\sum_{n=0}^\infty f_n z^n$. However, I have found no formal proof of this, and several questions arise:
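For comparison, here is a sketch of the Borel procedure on the same assumed Euler-series toy example (again, an illustration I am supplying, not part of the question): the Borel transform $B(t)=\sum_n f_n t^n/n!$ converges on a disk, is continued analytically (here it has the closed form $1/(1+t)$), and is then Laplace-transformed back. Only the coefficients $f_n$ are used; $f(z)$ itself never enters.

```python
import math

# Borel summation sketch for the assumed toy series f_n = (-1)^n n!.
# Step 1: the Borel transform B(t) = sum_n f_n t^n / n! = sum_n (-t)^n
# is geometric with radius 1; its analytic continuation is B(t) = 1/(1+t).
# Step 2: the Laplace integral recovers the function,
#   f_B(z) = int_0^inf e^{-s} B(z s) ds.

def borel_sum(z, T=50.0, n=100_000):
    """Composite Simpson's rule for int_0^T e^{-s} B(z s) ds,
    with B(t) = 1/(1 + t)."""
    h = T / n
    g = lambda s: math.exp(-s) / (1 + z * s)
    total = g(0.0) + g(T)
    total += 4 * sum(g((2 * k - 1) * h) for k in range(1, n // 2 + 1))
    total += 2 * sum(g(2 * k * h) for k in range(1, n // 2))
    return total * h / 3

z = 0.1
# For this toy example the Laplace integral coincides with the integral
# representation of f(z), so the Borel sum reproduces f(z) up to
# quadrature error, while any truncation of the original series leaves
# an error no smaller than ~ e^{-1/z}.
print(borel_sum(z))
```

This is why the Borel sum is often taken as "the" estimate in practice: it beats the optimal-truncation error floor while needing only the $f_n$. Whether that makes it the *best* estimate in a precise sense is exactly what the question asks.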

  1. Does the Borel sum return the same estimate one would obtain if $f(z)$ were known and $\mathcal{E}(z,N)$ could be minimized? Or is it merely "the best possible estimate given that $f(z)$ is not known"?
  2. In either case, is there a proof?
  3. The link indicates that Watson's theorem and Carleman's theorem show that Borel summation produces the best estimate, but I fail to see how.

I am only interested in series whose coefficients grow like $f_n \sim a^n\, n^b\, n!$ for constants $a,b$, so a proof that Borel summation returns the best estimate for that class of divergent series would be enough.
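For coefficients in this class, a quick sanity check (with illustrative values of $a$ and $b$ that I am assuming) shows why the Borel machinery applies at all: dividing out $n!$ leaves coefficients growing only geometrically, so the Borel transform $B(t)=\sum_n (f_n/n!)\,t^n$ converges on a disk of radius $1/|a|$.

```python
import math

# Illustrative values, assumed for the check: a = 2, b = 1.5.
# If f_n ~ a^n n^b n!, the Borel-transform coefficients
# b_n = f_n / n! ~ a^n n^b grow only geometrically, so the ratio
# b_{n+1} / b_n tends to a, and B(t) has radius of convergence 1/a.
a, b = 2.0, 1.5
ns = range(1, 60)
f = [a ** n * n ** b * math.factorial(n) for n in ns]
b_coef = [fn / math.factorial(n) for n, fn in zip(ns, f)]
ratios = [b_coef[k + 1] / b_coef[k] for k in range(len(b_coef) - 1)]
print(ratios[-1])  # tends to a = 2 as n grows, i.e. radius 1/a = 1/2
```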

Thanks in advance!