Numerical analytic continuation for $\tanh$-like function


I have a class of functions $f(z) \in \mathcal{F}$ of which I know the Taylor series at $z=0$ up to some freely chosen order $N$. I am trying to numerically estimate the value of $f(z)$ outside its radius of convergence. The functions have the following properties:

  • Analytic in a neighborhood of $z=0$.
  • For $z\to \infty$ with $\Re(z)>0$, $|f(z) - c_+| = O(e^{-g_+ \Re(z)})$ with $g_+>0$.
  • For $z\to \infty$ with $\Re(z)<0$, $|f(z) - c_-| = O(e^{g_- \Re(z)})$ with $g_->0$.
  • The functions have poles on a vertical line in the complex plane with constant real part
  • The functions are analytic everywhere else.

Edit 22/12/2022

  • The functions additionally have the following property. Defining

$$ f(t) = \sum_{n=0}^\infty a_n t^n$$

The coefficients grow as $\vert a_n \vert \sim c^n$ for some positive constant $c$. I can rescale $f$ so that $c<1$, but then the asymptotic behavior at $t\to \infty$ sets in only at larger $t$, so I think nothing is really gained.

end edit

An example of the kind of function I am talking about is given by the function

$$ f(t) = \tanh( g(t-t_0)) $$

with $g,t_0 \in \mathbb{R}$, $g>0$. The function has poles at

$$ t = t_0 +i \frac{\pi}{g} \left ( n +\frac{1}{2} \right ), \quad n\in\mathbb{Z} $$

and converges to $\pm 1$ for $t\to \pm\infty $. I would like to estimate the value $c_+ = \lim_{t\to+\infty} f(t) = 1$ knowing its Taylor series at zero.

Note that, for $t\to +\infty$

$$ f(t) = 1 +2 e^{-2g(t-t_0)} + O( e^{-4g(t-t_0)}) $$

so a value of $t$ for which $2g(t-t_0)$ is greater than, say, $4.5$ suffices to obtain a good approximation of $c_+$.
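To make the setup concrete, here is a small Python sketch using mpmath (my choice of tool, not part of the question) for the special case $g=1$, $t_0=0$: the truncated Taylor series is accurate inside the radius of convergence $\pi/(2g)$, but at $t=3$ it has blown up, even though the asymptotic bound already puts $f(3)$ within $10^{-2}$ of $c_+=1$.

```python
import math
import mpmath as mp

mp.mp.dps = 30                  # extra precision for high-order derivatives

# the example function with g = 1, t0 = 0: f(t) = tanh(t)
N = 21
a = mp.taylor(mp.tanh, 0, N)    # Taylor coefficients a_0 ... a_N at t = 0

R = math.pi / 2                 # radius of convergence: nearest pole at i*pi/2

def partial_sum(t):
    """Evaluate the truncated Taylor series at t."""
    return math.fsum(float(a[n]) * t**n for n in range(N + 1))

t_in, t_out = 0.5 * R, 3.0
# inside the radius the truncated series matches tanh closely;
# at t_out = 3 it has diverged badly, yet |tanh(3) - 1| < 0.01
```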

I have tried Padé approximants but they don't perform so well. I suspect that a modification of the answer to this question may work, but this is definitely outside my expertise.
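For reference, here is what the direct Padé attempt looks like in Python/mpmath (an illustration I added, with assumed values $g=1$, $t_0=1$; the asker's actual functions behave worse): a diagonal approximant of the series itself, which extends the Taylor data somewhat past the radius of convergence.

```python
import mpmath as mp

mp.mp.dps = 30

# example function f(t) = tanh(g (t - t0)) with assumed g = 1, t0 = 1
f = lambda t: mp.tanh(t - 1)

a = mp.taylor(f, 0, 12)          # 13 Taylor coefficients at t = 0

# diagonal [6/6] Pade approximant built from those coefficients
p, q = mp.pade(a, 6, 6)

def pade_eval(t):
    # mpmath returns ascending coefficients; polyval wants highest first
    return mp.polyval(p[::-1], t) / mp.polyval(q[::-1], t)

# the Taylor series only converges for |t| < |t0 + i*pi/2| ~ 1.86,
# but the Pade approximant remains usable somewhat beyond that
```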

Edit 24/12/2022

Here is an example of such a function up to order 21:

\begin{align} f(t) &= 1.\, -768. t+9420. t^2+27112. t^3-4.01876\times 10^6 t^4+8.1376\times 10^7 t^5 \\ & +2.7442\times 10^8 t^6-4.88657\times 10^{10} t^7+7.90082\times 10^{11} t^8+1.02608\times 10^{13} t^9\\ & -5.72507\times 10^{14} t^{10}+5.21309\times 10^{15} t^{11}+1.81363\times 10^{17} t^{12}\\ &-5.74547\times 10^{18} t^{13}+1.72697\times 10^{19} t^{14}+2.43501\times 10^{21} t^{15}\\ &-5.26563\times 10^{22} t^{16}-2.09506\times 10^{23} t^{17}+2.91622\times 10^{25} t^{18}\\ & -4.3093\times 10^{26} t^{19}-6.19092\times 10^{27} t^{20}+3.20257\times 10^{29} t^{21} +O(t^{22}) \end{align}

Here is a plot of the function $f(t)$ together with its Taylor series up to order $21$. (Plot omitted.)

The asymptotic value is $f(\infty) = -23.9832$. Using the method described in the answer and Padé [9,10] I obtain $-24.1351$. However, using the series up to order 20 and Padé [8,10], all the solutions have a sizeable imaginary part. Using up to 30 terms I obtain slightly better numbers, but then precision starts decreasing. Presumably this is also due to rounding errors, given that the coefficients tend to be very large (a problem that can't easily be cured).

Answer:

There exist a few methods that can be used to estimate the limit at infinity of functions specified by series expansions with a finite or even zero radius of convergence. The problem is that these are hit-and-miss methods that won't always work, or that may require a lot of effort to reach even a poor estimate.

There exists, however, a very powerful method that can be used to transform a function by modifying its series expansion, which changes the limit at infinity in a predictable way. It is based on Ramanujan's Master Theorem. In the following we assume that $f(0) = 0$. We then have the following results:

If $f(x)$ has a series expansion of the form:

$$f(x) = \sum_{k=1}^{\infty}(-1)^k\frac{c_k}{k!} x^k \tag{1}$$

where the $c_k$ are the values attained by a meromorphic function of exponential type satisfying a growth-rate bound that makes this function uniquely determined by the $c_k$, then we have:

$$\lim_{x\to\infty} f(x) = -c_0$$

Note that $c_0$ is not necessarily equal to $f(0) = 0$: it is defined as the value at zero of the analytic continuation of the $c_k$ for $k\geq 1$, and that analytic continuation is uniquely determined once the growth-rate conditions are imposed.
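As a sanity check on this formula (my own illustration, not from the answer): for $f(x) = 1 - e^{-x}$, writing the series in form (1) gives $c_k = -1$ for all $k \ge 1$. The obvious exponential-type interpolation is the constant function $-1$, so $c_0 = -1$ and the predicted limit $-c_0 = 1$ agrees with the actual limit.

```python
import math

# f(x) = 1 - exp(-x) in form (1): sum_{k>=1} (-1)^k c_k / k! x^k, c_k = -1;
# the natural exponential-type interpolation is the constant c(k) = -1
c = lambda k: -1.0

def f_series(x, K=120):
    # partial sum of (1); the series converges for every x (entire function)
    return math.fsum((-1) ** k * c(k) / math.factorial(k) * x**k
                     for k in range(1, K + 1))

limit_predicted = -c(0)          # Ramanujan-style prediction: -c_0 = 1
# compare with the actual limit of 1 - e^{-x}, which is 1
```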

If the function $f(x)$ is an even function then we must first substitute $x = \sqrt{t}$ to obtain a series of the above form, and the above formula for the limit then applies. Note, however, that after this transform the series may still be even, in which case this transform must be repeated, or we may have an odd series, in which case one should proceed as described below.

If $f(x)$ is an odd function, then it's convenient to define the expansion coefficients as follows:

$$f(x) = \sum_{k=0}^{\infty}(-1)^kc_k x^{2k+1}\tag{2}$$

We then have:

$$\lim_{x\to\infty} f(x) = \frac{\pi}{2}\lim_{k\to-\frac{1}{2}}(2k+1)c_{k}$$

So, in this case the limit will be nonzero if $c_k$ has a simple pole at $k = -\frac{1}{2}$.
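A quick check of the odd case (again my own illustration): $\arctan x = \sum_{k\ge0}(-1)^k x^{2k+1}/(2k+1)$ has $c_k = 1/(2k+1)$, which indeed has a simple pole at $k=-1/2$. Since $(2k+1)c_k \equiv 1$, the formula predicts the limit $\frac{\pi}{2}\cdot 1 = \pi/2$, which is $\lim_{x\to\infty}\arctan x$.

```python
import math

# arctan(x) = sum_{k>=0} (-1)^k x^{2k+1} / (2k+1), so c_k = 1/(2k+1) in (2)
c = lambda k: 1.0 / (2 * k + 1)

# (2k+1) c_k is identically 1, so its limit as k -> -1/2 is 1;
# probe the limit numerically just to mimic the formula mechanically
k = -0.5 + 1e-9
limit_predicted = math.pi / 2 * (2 * k + 1) * c(k)
# actual limit: arctan(x) -> pi/2 as x -> infinity
```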

These formulas are not directly useful unless one has an explicit analytical expressions for the expansion coefficients. In principle one could consider extrapolating to find $c_0$, but usually this doesn't work well.

We can, however, still make good use of these formulas by multiplying $c_k$ by a known meromorphic function, which induces a predictable change in the limit but can radically change the properties of the function, making it amenable to a particular method for extracting the limit at infinity. For example, we can transform a function with a finite radius of convergence into one with an infinite radius of convergence by dividing $c_k$ by $k!$. If (1) applies then the limit is unchanged, because $0! = 1$. If (2) applies then dividing by $(2k+1)!$ also leaves the limit invariant, while in the even case the same is achieved by dividing by $k!$ after substituting $x = \sqrt{t}$.
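A minimal sketch of this Borel-type transform (an example pair I chose, not from the answer): in form (1), $x/(1+x)$ has $c_k = -k!$ and radius of convergence $1$; dividing $c_k$ by $k!$ gives the constant sequence $-1$, i.e. the entire function $1 - e^{-x}$, and the limit $1$ at $+\infty$ is unchanged.

```python
import math

# x/(1+x) = sum_{k>=1} (-1)^{k+1} x^k  (radius 1); in form (1) c_k = -k!
def original(x):
    return x / (1 + x)                  # limit 1 as x -> infinity

# dividing c_k by k! gives c_k = -1, i.e. the entire function 1 - e^{-x}
def transformed(x, K=120):
    return math.fsum((-1) ** (k + 1) / math.factorial(k) * x**k
                     for k in range(1, K + 1))

# both functions tend to 1 at +infinity; only the radius of
# convergence of the defining series has changed
```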

Instead of multiplying $c_k$ by an explicitly defined function, one can multiply by the expansion coefficients of another function whose limit at infinity is known, even when no explicit formula for those coefficients is available. If (1) applies to both functions, the continuation of the product of the coefficients takes the value $c_0 d_0$ at zero (where $d_k$ are the coefficients of the second function), so this changes the limit into minus the product of the limits of the two functions.

Which manipulations are admissible depends on the analytic properties of the analytic continuation of $c_k$, and these will not be known. Certain manipulations are therefore riskier. Exponentiating $c_k$, for example, will usually be a problem: the restriction that the analytic continuation be of exponential type means that the continuation of the exponentiated coefficients will in general not be the exponentiated continuation, so one cannot assume that this procedure changes a limit of $L$ into $-\exp(-L)$.

If we can then manipulate the function and extract the limit by a suitable method, we can also find the way it exponentially approaches the limit, as follows. If we assume that a limit $L$ is approached for large $x$ as $f(x) = L + A\exp(- g x)+ \cdots$, then we can find $g$ by taking the derivative of the series that defines the function, then the logarithm, and then differentiating again. This produces a series defining the function $f''/f'$, which has $-g$ as its limit at infinity.
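The derivative–log–derivative recipe can be checked symbolically, e.g. with SymPy (my tooling choice) for $f = \tanh$: the resulting function is $f''/f' = -2\tanh x$, whose limit at infinity is $-2$, recovering (up to sign) the decay rate $g = 2$ of $\tanh x = 1 - 2e^{-2x} + \cdots$.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.tanh(x)

# derivative, then log, then derivative again: d/dx log f'(x) = f''/f'
rate = sp.simplify(sp.diff(sp.log(sp.diff(f, x)), x))   # -2*tanh(x)

# the limit at +infinity is -g for f(x) = L + A exp(-g x) + ...
g = -sp.limit(rate, x, sp.oo)
```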

To compute the limit to infinity, one can consider the following method which requires that $f(x)$ is monotonically increasing or decreasing. We then consider the function:

$$h(t) = f(t) - f(0)$$

to remove the constant term in the series expansion. The next step is to compute the series expansion of the inverse of $h(t)$. Mathematica has a simple command for this: "InverseSeries[S,z]" outputs the series of the inverse of S expressed in powers of z, where S is given as a series. But you can easily compute this without a computer algebra system by writing down the inverse series in terms of undetermined coefficients: substituting the series for $h(t)$ into the inverse series and demanding that the result equal $t$ yields linear equations for the undetermined coefficients.

Then, in case $h(t)$ is monotonic, the values reached at positive and negative infinity will be singular points of the inverse. One can find these points using Padé approximants. Because we are assuming that the limits are approached exponentially, taking the derivative of the inverse series first should improve the accuracy of the Padé approximant. The limits are then the negative and positive zeroes of the denominator with the smallest absolute values. The coefficients $g_{\pm}$ characterizing the exponential approach to the limits can then be read off from the residues of the Padé approximant at these two poles: for an increasing $h$, the residue of $(h^{-1})'$ at the pole equals $\mp 1/g_{\pm}$.
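Here is a sketch of the whole pipeline in Python/mpmath (a translation of the Mathematica steps; the hypothetical helper `revert` plays the role of InverseSeries) for $h(t) = \tanh t$: series reversion by undetermined coefficients, a Padé approximant of $(h^{-1})'$, its real poles as the limits $c_\pm = \pm 1$, and the decay rate $g_+ = 2$ from the residue.

```python
import mpmath as mp

mp.mp.dps = 30
N = 8                                     # series order used

# Taylor coefficients of h(t) = tanh(t) at t = 0 (h(0) = 0 already)
a = mp.taylor(mp.tanh, 0, N)

def revert(a, N):
    """Series reversion by undetermined coefficients: find b with
    b(a(t)) = t + O(t^{N+1}); requires a[0] = 0 and a[1] != 0."""
    # apow[j] = coefficients of a(t)^j truncated at order N
    apow = [None, list(a)]
    for j in range(2, N + 1):
        nxt = [mp.mpf(0)] * (N + 1)
        for m in range(N + 1):
            for n in range(N + 1 - m):
                nxt[m + n] += apow[j - 1][m] * a[n]
        apow.append(nxt)
    b = [mp.mpf(0)] * (N + 1)
    b[1] = 1 / a[1]
    for n in range(2, N + 1):
        b[n] = -sum(b[j] * apow[j][n] for j in range(1, n)) / apow[n][n]
    return b

b = revert(a, N)                          # series of h^{-1}(y) = atanh(y)
d = [(k + 1) * b[k + 1] for k in range(N)]  # series of (h^{-1})'(y)

# Pade approximant of (h^{-1})'; its poles are the limits of h at +/- infinity
p, q = mp.pade(d, 2, 2)
poles = [complex(r) for r in mp.polyroots(q[::-1])]
real_poles = sorted(r.real for r in poles if abs(r.imag) < 1e-8)
c_minus, c_plus = real_poles[0], real_poles[-1]

# decay rate g_+ from the residue of the approximant at the positive pole
P = lambda y: mp.polyval(p[::-1], y)
Q = lambda y: mp.polyval(q[::-1], y)
residue = P(c_plus) / mp.diff(Q, c_plus)  # equals -1/g_+
g_plus = float(-1 / residue)
```

For $\tanh$ the approximant is exact, since $(h^{-1})'(y) = 1/(1-y^2)$ is rational, which is why small Padé degrees already suffice here.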

If you try this for $f(t) = \tanh(t)$ with a finite number of terms for the series expansion, then the exact solution will be reproduced, because the derivative of the inverse is a rational function, which the Padé-approximant will reproduce exactly from only a finite number of terms of the series.

If the function $f(t)$ is not monotonically increasing or decreasing, then this method will only yield the local extrema closest to the origin, because at those points the inverse has branch-point singularities.