Given a second-order linear ordinary differential equation, \begin{equation} \epsilon^2 \frac{d^2y}{dx^2} = Q(x) y(x), \tag{1}\label{1} \end{equation} where $\epsilon$ is regarded as a small positive number, a typical explanation of the WKB method (e.g. Ch. 10 of Bender and Orszag, Advanced Mathematical Methods for Scientists and Engineers, 1978) starts with writing $y(x)$ as \begin{equation} y(x) = \exp\left(\frac{1}{\delta}S(x,\delta)\right), \tag{2}\label{2} \end{equation} and assuming that $S(x,\delta)$ has an asymptotic series in $\delta$, \begin{align} S(x,\delta) \sim& \sum_{n=0}^{\infty} \delta^n S_n(x) & \text{ as } \delta\rightarrow&0+. \tag{3}\label{3} \end{align} Then, by substituting eqs. (\ref{2}) and (\ref{3}) into eq. (\ref{1}) and dividing both sides by $\exp(S/\delta)$, it is claimed that \begin{align} \frac{\epsilon^2}{\delta^2} \left[\sum_{n=0}^{\infty} \delta^n \frac{d S_n}{dx}\right]^2 +\frac{\epsilon^2}{\delta} \sum_{n=0}^{\infty} \delta^n \frac{d^2 S_n}{dx^2} \sim& Q(x) & \text{ as } \delta\rightarrow&0+. \tag{4}\label{4} \end{align} From here, the argument proceeds that we can set $\delta = \epsilon$ from dominant balance, and that we can equate the like powers of $\epsilon$ on both sides to yield differential equations for the coefficient functions $\{S_n(x)\}$ as \begin{align} \left(\frac{dS_0}{dx}\right)^2 =& Q(x), \tag{5}\label{5}\\ 2 \frac{dS_0}{dx}\frac{dS_1}{dx}+\frac{d^2S_0}{dx^2}=&0, \tag{6}\label{6}\\ \dots \end{align} and these equations are solved one after another.
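For concreteness, away from zeros of $Q$ the first two equations can be integrated explicitly (this is the standard result found, e.g., in the Bender and Orszag chapter cited above):

\begin{align}
S_0(x) =& \pm\int^x \sqrt{Q(t)}\,dt, &
S_1(x) =& -\tfrac{1}{4}\ln Q(x),
\end{align}
so that to leading order
\begin{equation}
y(x) \approx C\, Q(x)^{-1/4} \exp\left(\pm\frac{1}{\epsilon}\int^x \sqrt{Q(t)}\,dt\right).
\end{equation}

(The formula for $S_1$ follows from eq. (\ref{6}) since $S_1' = -S_0''/(2S_0') = -\tfrac{1}{2}(\ln S_0')'$.)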
Here is my question. How is it justified to differentiate the asymptotic series in eq. (\ref{3}) to derive eq. (\ref{4})?
I wondered whether the fact that $y(x)$ is a solution of eq. (\ref{1}) could be used in some way, but I have not found one so far.

I have read that differentiating both sides of an asymptotic relation does not always yield another asymptotic relation, and I suspect that even if \begin{align} f(x,\epsilon)\sim& g_0(x) + g_1(x) \epsilon + g_2(x) \epsilon^2 +\cdots& \text{ as } \epsilon\rightarrow& 0, \tag{7}\label{7} \end{align} uniformly in some domain of $x$, it does not necessarily follow that \begin{align} \frac{df}{dx}(x,\epsilon) \sim& \frac{dg_0}{dx}(x) + \frac{dg_1}{dx}(x)\epsilon + \frac{dg_2}{dx}(x) \epsilon^2 +\cdots& \text{ as }\epsilon \rightarrow& 0. \tag{8}\label{8} \end{align} Or, does it follow (under some conditions)?
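A standard counterexample (not specific to the WKB setting) shows that differentiation in $x$ can indeed destroy an asymptotic relation. Take

\begin{equation}
f(x,\epsilon) = e^{-1/\epsilon}\sin\!\left(x\, e^{1/\epsilon}\right).
\end{equation}

Since $|f| \le e^{-1/\epsilon} = o(\epsilon^N)$ for every $N$, we have $f \sim 0 + 0\cdot\epsilon + 0\cdot\epsilon^2 + \cdots$ uniformly in $x$. But
\begin{equation}
\frac{\partial f}{\partial x}(x,\epsilon) = \cos\!\left(x\, e^{1/\epsilon}\right),
\end{equation}
which is $\mathcal{O}(1)$ and does not tend to $0$, so the termwise-differentiated series (all zeros) is not an asymptotic expansion of $\partial f/\partial x$.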
In the context of asymptotic series, the `$\sim$'-symbol means the following:
Fix $N \in \mathbb{N}$. There exists a $\delta_0 > 0$ such that for all positive $\delta < \delta_0$, we have the equality \begin{equation} S(x,\delta) = \sum_{n=0}^N s_n(x) \delta^n + \mathcal{O}(\delta^{N+1}), \tag{*} \end{equation} which is uniformly valid (i.e. for all $x$ in the domain in question).
If we can find a domain for $x$ and a $\delta_0>0$ such that the above holds for all $N\in \mathbb{N}$, then we use the notation \begin{equation} S(x,\delta) \sim \sum_{n=0}^\infty s_n(x) \delta^n. \end{equation}
Note that this does not mean that the series converges in the classical sense. Moreover, we can add a term like $e^{-x/\delta}$ (for $x$ bounded away from $0$) to the original function without changing the series approximation at any order. For more information on asymptotic series, see e.g.
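To see why such a term is invisible at every order: for fixed $x > 0$ and any $N \in \mathbb{N}$, substituting $u = x/\delta$ gives

\begin{equation}
\lim_{\delta\rightarrow 0+} \frac{e^{-x/\delta}}{\delta^N}
= \lim_{u\rightarrow\infty} \left(\frac{u}{x}\right)^{N} e^{-u} = 0,
\end{equation}

so $e^{-x/\delta} = \mathcal{O}(\delta^{N+1})$ for every $N$, and adding it leaves the relation $(*)$ intact at every order.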
F. Verhulst, Methods and Applications of Singular Perturbations, Springer, 2005.