Consider a function $f:[0,\infty)\times \mathbb R\to\mathbb R$, and suppose that, given some $a>0$, I would like to solve for $x\in\mathbb R$ satisfying \begin{align} f(\delta, x) = a. \end{align} Suppose, additionally, that $f$ is sufficiently horrible that obtaining a solution in closed form is difficult or impossible, but that I only care about finding solutions for "small" $\delta$.
My first instinct in finding such solutions would be to perform a formal power series expansion of $x$ in the parameter $\delta$, \begin{align} x = x_0 +x_1\delta + x_2\delta^2+\cdots \end{align} plug this into the function $f$, expand the result as a formal power series in $\delta$ as well (if this turns out to be possible), \begin{align} f(\delta, x_0+x_1\delta+x_2\delta^2+\cdots) = f_0 + f_1\delta + f_2\delta^2+\cdots \end{align} then set the right-hand side equal to $a$, and obtain the following sequence of equations determining the coefficients $x_k$ order-by-order in $\delta$: \begin{align} f_0 &= a \\ f_1 &= 0 \\ f_2 &= 0 \\ &\vdots \end{align} Is there a sense in which such manipulations with formal power series can lead to approximate solutions to the original equation? In particular, if the resulting power series for $x$ seems to have a nonzero radius of convergence in $\delta$ about $\delta = 0$, have I obtained an approximation to a solution of the original equation?
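To make the procedure concrete, here is a small sketch of the order-by-order scheme carried out symbolically with `sympy`. The choice $f(\delta, x) = x + \delta \sin x$ is purely illustrative (it is not from the problem above, just a case where no closed-form solution exists); the idea is to expand $f(\delta, x_0 + x_1\delta + \cdots) - a$ in powers of $\delta$ and solve each coefficient equation in turn:

```python
import sympy as sp

delta, a = sp.symbols('delta a')
ORDER = 4
x = sp.symbols(f'x0:{ORDER}')  # unknown coefficients x0, x1, x2, x3

# Hypothetical example: solve f(delta, x) = x + delta*sin(x) = a
X = sum(x[k] * delta**k for k in range(ORDER))
f = X + delta * sp.sin(X)

# Expand f - a as a truncated power series in delta
expansion = sp.series(f - a, delta, 0, ORDER).removeO()
coeffs = [expansion.coeff(delta, k) for k in range(ORDER)]

# Solve order by order: each coefficient equation is linear in the
# next unknown x_k once the lower-order solutions are substituted in
sol = {}
for k in range(ORDER):
    eq = coeffs[k].subs(sol)
    sol[x[k]] = sp.simplify(sp.solve(eq, x[k])[0])

for k in range(ORDER):
    print(f'x_{k} =', sol[x[k]])
```

For this $f$ one finds $x_0 = a$, $x_1 = -\sin a$, $x_2 = \sin a \cos a$, and so on; substituting the truncated series back into $f(\delta, x) - a$ leaves a residual of order $\delta^4$, which is exactly the sense in which the formal manipulation produces an approximate solution.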
Something similar to this has come up in a theoretical physics research problem, and this is all very murky water for me, so any insight will be much appreciated.