The context: suppose I have $n$ equations $f_1, \cdots, f_n$ in $n$ variables $x_1, \cdots, x_n$, i.e. I would like to solve $f_i(x_1, \cdots, x_n) = 0$ for all $1 \leq i \leq n$. Suppose further that the system $\{f_i \mid 1 \leq i \leq n\}_{\epsilon, \delta}$ is parameterised by $\epsilon, \delta$, both sufficiently small. The functions $f_i$ are polynomials in $x_1, \cdots, x_n, \delta$ with coefficients depending only on $\epsilon$. For instance, $f_1$ might look like $$1 + \delta + \sin(\epsilon)x_1(1 - \delta) + \sin(2\epsilon)x_1^2.$$ I'm interested in the asymptotic behaviour of the solutions to this system as $\epsilon, \delta \rightarrow 0$. Solving the system in full generality might be quite difficult, depending on the $\epsilon$-dependent coefficients. However, I can take a series expansion of the coefficients in $\epsilon$, say a linear expansion (or a quadratic), solve this simplified system, and then take a series expansion of the solution in $\delta$.
So if we have only one equation, say $f_1 = 1 + \delta + \sin(\epsilon)x_1^2 = 0$, then a linear approximation of the $\epsilon$-dependent coefficient gives $f_1 = 1 + \delta + \epsilon x_1^2 = 0$. We then solve this to get $x_1$ as a function of $\delta, \epsilon$, and finally take a series expansion of that solution in $\delta$.
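A minimal sketch of this procedure (assuming SymPy is available; the variable names are my own) for the single-equation example: linearise the coefficient, solve symbolically, then series-expand a root in $\delta$:

```python
# Sketch of the approximation strategy on f1 = 1 + delta + sin(eps)*x1^2 = 0.
import sympy as sp

x1, eps, delta = sp.symbols('x1 epsilon delta')

# Original equation, with an eps-dependent coefficient sin(eps)
f1 = 1 + delta + sp.sin(eps) * x1**2

# Linear expansion of the coefficient: sin(eps) ~ eps
f1_approx = 1 + delta + eps * x1**2

# Solve the simplified equation for x1 (two roots)
roots = sp.solve(f1_approx, x1)

# Expand one root as a series in delta about delta = 0
expansion = sp.series(roots[0], delta, 0, 3)
print(expansion)
```

Here the roots are $\pm\sqrt{-(1+\delta)/\epsilon}$ (complex for small $\epsilon > 0$), and the final step gives their behaviour for small $\delta$.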
I know so far that this strategy does not work in full generality, in the sense that approximate solutions arrived at by approximating the coefficients do not necessarily translate to solutions of the original system. For instance, take $$ x^2 - 2\cos(\epsilon)x + 1 = 0.$$ The discriminant of this quadratic is $4\cos^2\epsilon - 4 < 0$ for all small $\epsilon > 0$, so there are no real roots, yet a linear approximation (the linear term of $\cos$ vanishes) gives $$ x^2 - 2x + 1 = 0,$$ which has a repeated root at $x = 1$. Extending to a quadratic expansion fixes this particular example, since the discriminant becomes $4(1 - \epsilon^2/2)^2 - 4 < 0$, but I can just replace $\cos(\epsilon)$ by a function whose series expansion is $1 + \mathcal{O}(\epsilon^3)$, and the same failure recurs.
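The counterexample above can be checked numerically; a small sketch (assuming NumPy, with a hypothetical choice $\epsilon = 0.1$) comparing the three discriminants:

```python
# Compare discriminants of x^2 - 2*c*x + 1 for the true coefficient
# c = cos(eps) and its truncated expansions.
import numpy as np

eps = 0.1  # any small positive value

# Original: c = cos(eps); discriminant 4*cos(eps)^2 - 4 is negative,
# so the true equation has no real roots
disc_original = 4 * np.cos(eps)**2 - 4

# Linear expansion: cos(eps) ~ 1 (the eps term vanishes), giving
# x^2 - 2x + 1 = 0 with discriminant 0 and a spurious repeated root x = 1
disc_linear = 4 * 1**2 - 4

# Quadratic expansion: cos(eps) ~ 1 - eps^2/2; the discriminant is
# negative again, restoring the qualitative behaviour
disc_quadratic = 4 * (1 - eps**2 / 2)**2 - 4

print(disc_original, disc_linear, disc_quadratic)
```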
Here's the question: suppose I expand to a sufficiently high order that every $\epsilon$-dependent coefficient retains some $\epsilon$ dependence after expansion; in the $\cos$ case above, this means taking at least the quadratic expansion, and if some coefficient is $1 + \epsilon^{99} + \mathcal{O}(\epsilon^{100})$, it means expanding to order 99. Does the strategy then work? The idea is that continuity should fix these cases, and I can't think of a counterexample.