Inversion of asymptotic series


I have the following equation, which involves a power series expansion: $$ \alpha_1+A_1=-\sum_{k=1}^\infty (\alpha_{k+1}+A_{k+1})x^k/k!, $$ where $\alpha_j=O(1)$ and $A_j=O_p(n^{-1/2})$ for $j\in\{1,2,\ldots\}$ as $n\rightarrow\infty$. I'd like to find the inverse series, i.e. $x$ as a function of $\alpha_1+A_1$. Wikipedia gives a solution to this problem (reversion of series). However, I have second thoughts about the validity of that result in my case, because I know that when $\alpha_1=0$ then $x=O_p(n^{-1/2})$, and otherwise $x=O(1)$. It is clear that in the first case the right-hand side of the equation converges to $0$ as $n\rightarrow\infty$, whereas in the second case it diverges. Bottom line: does the solution given on Wikipedia hold when the power series is not convergent? If not, could you please tell me whether it is possible at all to find a valid series expansion for $x$ and, if so, how?


I guess this is going to be a bit messy. For the first part, the way in which $x$ depends on $n$ is not important.

Given a function $h(x)$ and a point $x_0$, we can make a Taylor expansion in $\delta \equiv x-x_0$: $$ h(x)=h(x_0) + \sum_{k \geq 1} \frac{h^{(k)}(x_0)}{k!} \delta^k. $$ If we define $y \equiv h(x)$ and in particular $y_0 = h(x_0)$, we can also write the inverse function $x = h^{-1}(y) \equiv g(y)$ with $g(y_0)=x_0$ and likewise make a Taylor expansion about $y_0$ in $\epsilon \equiv y-y_0$: $$ g(y)=g(y_0) + \sum_{l \geq 1} \frac{g^{(l)}(y_0)}{l!} \epsilon^l. $$

Since $h(x)-h(x_0)=y-y_0=\epsilon$ and $g(y)-g(y_0)=x-x_0=\delta$, we find $$ \epsilon = \sum_{k \geq 1} \frac{h^{(k)}(x_0)}{k!} \delta^k, \qquad \delta = \sum_{l \geq 1} \frac{g^{(l)}(y_0)}{l!} \epsilon^l, $$ and we can substitute the second into the first to obtain $$ \epsilon = \sum_{k \geq 1} \frac{h^{(k)}(x_0)}{k!} \left( \sum_{l \geq 1} \frac{g^{(l)}(y_0)}{l!} \epsilon^l \right)^k. $$

Expanding this result, we find that each group of terms of the same order in $\epsilon$ yields an equation that enables us to relate the derivatives $h^{(k)}(x_0)$ and $g^{(l)}(y_0)$ (I drop the arguments for convenience): $$ {\cal O}(\epsilon) : h^{(1)} g^{(1)} - 1 = 0, $$ $$ {\cal O}(\epsilon^2) : h^{(1)} g^{(2)} + h^{(2)} \left(g^{(1)}\right)^2 = 0, $$ $$ {\cal O}(\epsilon^3) : h^{(1)} g^{(3)} + 3 h^{(2)} g^{(1)} g^{(2)} + h^{(3)} \left(g^{(1)}\right)^3 = 0, $$ which can be solved uniquely, giving $$ g^{(1)} = \frac{1}{h^{(1)}}, \qquad g^{(2)} = -\frac{h^{(2)}}{\left( h^{(1)} \right)^3}, \qquad g^{(3)} = \frac{3 \left( h^{(2)} \right)^2 - h^{(1)} h^{(3)}}{\left( h^{(1)} \right)^5}. $$

The solutions are analytic, but a program like Mathematica is advisable for higher-order terms. Note that the solution requires $h^{(1)}(x_0) \neq 0$. I also implicitly assumed that we restrict $\delta$ and $\epsilon$ to the domains where the Taylor series converge.
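As a quick sanity check (my own illustration, not part of the derivation), the three formulas above can be verified numerically for $h(x) = e^x$, whose inverse $g(y) = \ln y$ has known derivatives $g^{(1)}(y_0) = 1/y_0$, $g^{(2)}(y_0) = -1/y_0^2$, $g^{(3)}(y_0) = 2/y_0^3$:

```python
import math

def inverse_derivatives(h1, h2, h3):
    """First three derivatives of the inverse g at y0, computed from
    the derivatives of h at x0 via the formulas above (needs h1 != 0)."""
    g1 = 1.0 / h1
    g2 = -h2 / h1**3
    g3 = (3.0 * h2**2 - h1 * h3) / h1**5
    return g1, g2, g3

# Test case: h(x) = exp(x), so h^(k)(x0) = exp(x0) for every k,
# and the inverse is g(y) = log(y) with y0 = exp(x0).
x0 = 0.7
hk = math.exp(x0)   # common value of h^(1), h^(2), h^(3) at x0
y0 = math.exp(x0)

g1, g2, g3 = inverse_derivatives(hk, hk, hk)

# Compare against the known derivatives of log(y) at y0.
assert abs(g1 - 1.0 / y0) < 1e-12
assert abs(g2 - (-1.0 / y0**2)) < 1e-12
assert abs(g3 - 2.0 / y0**3) < 1e-12
```

The same check works for any smooth $h$ with $h^{(1)}(x_0) \neq 0$; only the three input derivatives change.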

In order for this approach to solve $h(\bar{x})=0$, it is required that $\bar{x}$ lies within the convergence domain of the expansion about $x_0$. We then have $$ \bar{x}=g(0)=g(y_0) + \sum_{l \geq 1} \frac{g^{(l)}(y_0)}{l!} (-y_0)^l, $$ and hence $-y_0$ should also lie in the convergence domain of the expansion of $g(y)$ about $y_0$.
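As a concrete numeric sketch of this root-finding use (the choice of $h$ and $x_0$ is mine, for illustration only), take $h(x) = e^x - 2$, whose root is $\bar{x} = \ln 2$. Expanding about $x_0 = 0.5$ and truncating the series for $g$ at third order already lands close to the root:

```python
import math

# Example: h(x) = exp(x) - 2, with root xbar = log(2).
x0 = 0.5
y0 = math.exp(x0) - 2.0        # y0 = h(x0)
h1 = h2 = h3 = math.exp(x0)    # every derivative of h equals exp(x0)

# Inverse-function derivatives from the formulas in the answer.
g1 = 1.0 / h1
g2 = -h2 / h1**3
g3 = (3.0 * h2**2 - h1 * h3) / h1**5

# Truncated series: xbar ~ g(0) = x0 + sum_l g^(l)(y0)/l! * (-y0)^l
xbar = x0 + g1 * (-y0) + g2 / 2.0 * (-y0)**2 + g3 / 6.0 * (-y0)**3

# Third-order truncation leaves only a small residual error.
assert abs(xbar - math.log(2.0)) < 1e-2
```

Here $-y_0 = 2 - e^{1/2} \approx 0.35$ lies comfortably inside the convergence domain of $g(y) = \ln(y+2)$ about $y_0$, which is why the truncated series behaves well.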