Suppose that for $t$ in some neighborhood $(0,\delta)$ we define $s>0$ via $$ \frac{a_2}{2!}t^2+\frac{a_3}{3!}t^3+\cdots=-s^2, $$ where $a_2<0$ and the series on the left-hand side converges. Why is it then true that $$ t=\left\{\frac{-2}{a_2}\right\}^{1/2}s+O(s^2)? $$

This appears as equation (2.27) in Murray's Asymptotic Analysis (1984). How did the author obtain this result? I looked at \begin{align*} \frac{t-[-2/a_2]^{1/2}s}{s^2}&=\frac{t}{-(\frac{a_2}{2!}t^2+\frac{a_3}{3!}t^3+\cdots)}-\frac{[-2/a_2]^{1/2}}{s}\\ &=\frac{1}{-(\frac{a_2}{2!}t+\frac{a_3}{3!}t^2+\cdots)}-\frac{[-2/a_2]^{1/2}}{s} \end{align*} but couldn't complete the justification.
Solving an asymptotic equation
Another way is to treat $\frac{a_2}{2!}t^2+\frac{a_3}{3!}t^3+\cdots=-s^2$ as an iteration: dividing by $t^2$ gives $\dfrac{-s^2}{t^2}=\frac{a_2}{2!}+\frac{a_3}{3!}t+\cdots$, or $t^2=\dfrac{-s^2}{\dfrac12 a_2+\dfrac16 a_3 t+\cdots}$.
With the first estimate $t=0$ in the denominator, this gives $t^2=\dfrac{-s^2}{\dfrac12 a_2}=\dfrac{-2s^2}{a_2}$.
Putting this in, we get
\begin{align*}
t^2 &=\frac{-s^2}{\frac12 a_2+\frac16 a_3\sqrt{\frac{-2s^2}{a_2}}+O(s^2)}\\
&=\frac{-s^2}{\frac12 a_2(1+O(s))}\\
&=\frac{-2s^2}{a_2}(1+O(s)),
\end{align*}
so
\begin{align*}
t &=\sqrt{\frac{-2s^2}{a_2}}\sqrt{1+O(s)}\\
&=\sqrt{\frac{-2s^2}{a_2}}(1+O(s))\\
&=\sqrt{\frac{-2s^2}{a_2}}+O(s^2).
\end{align*}
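As a sanity check (my addition, not part of the original answer), here is a small sympy sketch that runs this iteration on a concrete truncated series; the coefficients $a_2=-2$, $a_3=1$ and the cubic truncation are arbitrary choices for illustration. The series of the iterate should begin with $\{-2/a_2\}^{1/2}s=s$, consistent with $t=\{-2/a_2\}^{1/2}s+O(s^2)$.

```python
import sympy as sp

s = sp.symbols('s', positive=True)
a2, a3 = -2, 1  # arbitrary toy coefficients with a2 < 0 (my assumption)

# Fixed-point iteration  t^2 = -s^2 / (a2/2! + a3/3! * t),
# truncated after the cubic term and started from t = 0.
t = sp.Integer(0)
for _ in range(3):
    t = sp.sqrt(-s**2 / (sp.Rational(a2, 2) + sp.Rational(a3, 6) * t))

# Leading behaviour as s -> 0: the series should start with
# {-2/a2}^{1/2} * s = s, i.e. t = s + O(s^2).
print(sp.series(t, s, 0, 3))
```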
Results of this kind always need to be checked, so let's see what we get.
First, note that this estimate implies the cruder approximation $t=O(s)$. This is useful because we want to use the crudest approximation that still gives the same error terms everywhere.
The crude estimate gives $\frac{a_3}{3!}t^3+\cdots=O(s^3)$, and the more accurate estimate gives $\frac12 a_2t^2=\frac12 a_2\left(\dfrac{-2s^2}{a_2}(1+O(s))\right)=-s^2(1+O(s))=-s^2+O(s^3)$.
Adding these, $\frac12 a_2t^2+\frac{a_3}{3!}t^3+\cdots =-s^2+O(s^3) $, which is what we want.
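To see this residual claim concretely (again my addition, with the same toy coefficients and cubic truncation as above), substitute the leading-order approximation $t=\{-2/a_2\}^{1/2}s$ into the truncated left-hand side and check that $-s^2$ is reproduced up to $O(s^3)$:

```python
import sympy as sp

s = sp.symbols('s', positive=True)
a2, a3 = -2, 1                         # same toy coefficients, a2 < 0
t = sp.sqrt(sp.Rational(-2, a2)) * s   # leading-order approximation of t

# Residual of a2/2! t^2 + a3/3! t^3 = -s^2 (truncated at the cubic term);
# the result should be -s^2 + O(s^3).
lhs = sp.Rational(a2, 2) * t**2 + sp.Rational(a3, 6) * t**3
print(sp.expand(lhs))  # -s**2 + s**3/6
```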
By looking at the equality: on the LHS (left-hand side) the smallest power of $t$ is $t^2$, while on the RHS we have $-s^2$. Assuming that $t$ can be written as a power series in $s$ (convergent and/or asymptotic), $t=\sum_k \alpha_k s^k$, the series must start at $k=1$: a $k=0$ term would contribute a constant to the LHS, while the RHS $-s^2$ has no constant term. Hence $t=\sum_{k=1}^\infty \alpha_k s^k$. Now, equating the coefficients of like powers of $s$ on the two sides immediately gives the first coefficient: at order $s^2$ we get $\frac{a_2}{2}\alpha_1^2=-1$, so $\alpha_1=\{-2/a_2\}^{1/2}$ (taking the positive root, since $t,s>0$).
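For concreteness (my sketch, not from the answer), this coefficient matching can be automated with sympy; the generic symbols and the cubic truncation are assumptions made for illustration:

```python
import sympy as sp

s, a2, a3, al1, al2 = sp.symbols('s a2 a3 alpha1 alpha2')

# Ansatz t = alpha1*s + alpha2*s^2, substituted into
# a2/2! t^2 + a3/3! t^3 = -s^2 (series truncated at the cubic term).
t = al1 * s + al2 * s**2
eq = sp.expand(sp.Rational(1, 2) * a2 * t**2
               + sp.Rational(1, 6) * a3 * t**3 + s**2)  # LHS - RHS

# Each power of s must vanish separately.
c2 = eq.coeff(s, 2)  # a2*alpha1**2/2 + 1 = 0  =>  alpha1 = +/- {-2/a2}^{1/2}
c3 = eq.coeff(s, 3)  # fixes alpha2 in terms of alpha1, a2, a3
print(sp.solve(c2, al1))  # two roots; the positive one is alpha1
print(sp.solve(c3.subs(al1, sp.sqrt(-2 / a2)), al2))
```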
I believe this is a fairly standard method for such problems; it is similar to methods encountered in perturbation theory (https://en.wikipedia.org/wiki/Perturbation_theory) and to the method of dominant balance (https://en.wikipedia.org/wiki/Asymptotic_analysis#Method_of_dominant_balance).