In perturbation theory, does it make sense to think of the added term as "shifting" the function and thereby altering the number of roots?
That is, consider e.g. $f(x)=x^2+\epsilon$. For $\epsilon=0$ there is a single (double) root $x=0$.
If $\epsilon > 0$, there are no real roots.
If $\epsilon < 0$, there are two real roots.
What confuses me is that I've not seen this kind of reasoning in the perturbation theory I've read so far. Rather, they seem to treat $\epsilon$ as something you plug in after the asymptotic expansion is done, and it then spits out an approximate value of the root $x(\epsilon)$.
But for example if one considers:
$$f(x(\epsilon))=(x_0+x_1\epsilon+x_2\epsilon^2+\cdots)^2+\epsilon = 0$$
Then I don't see how $\epsilon > 0$ versus $\epsilon < 0$ would make the perturbative solution any different. It is as if perturbation theory does not track the number of roots, only an approximate value for a single root.
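(For concreteness, here is my attempt at carrying the expansion out order by order; this is where it seems to break down.) Collecting powers of $\epsilon$:

$$(x_0+x_1\epsilon+\cdots)^2+\epsilon = x_0^2 + (2x_0x_1 + 1)\epsilon + O(\epsilon^2) = 0.$$

At $O(1)$ this gives $x_0^2 = 0$, so $x_0 = 0$. But then at $O(\epsilon)$ the equation $2x_0x_1 + 1 = 0$ reduces to $1 = 0$, a contradiction. The expansion in integer powers of $\epsilon$ fails entirely; the actual roots are $x = \pm\sqrt{-\epsilon}$, an expansion in powers of $\epsilon^{1/2}$, which is real only for $\epsilon \le 0$. So the sign-dependent root count is encoded in the failure of the naive ansatz.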
The two simple roots of, e.g., $x^2 - 1 + \epsilon$ vary analytically with $\epsilon$ near $\epsilon = 0$, so you can use regular perturbation theory to estimate them. The roots of $x^2+\epsilon$ do not vary analytically near $\epsilon = 0$: they behave like $\pm\sqrt{-\epsilon}$, which is not a power series in $\epsilon$, and this is exactly why the regular expansion breaks down there.
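To see the analytic case numerically, here is a small sketch (my own illustration, not from the post): the first-order perturbative estimate of the positive root of $x^2 - 1 + \epsilon = 0$ is $x \approx 1 - \epsilon/2$, obtained from $x_0^2 - 1 = 0$ at $O(1)$ and $2x_0x_1 + 1 = 0$ at $O(\epsilon)$. It agrees with the exact root $\sqrt{1-\epsilon}$ to $O(\epsilon^2)$ for $\epsilon$ of either sign, since the root is analytic at $\epsilon = 0$.

```python
import math

def exact_root(eps):
    # Exact positive root of x^2 - 1 + eps = 0.
    return math.sqrt(1.0 - eps)

def perturbative_root(eps):
    # First-order expansion x = x0 + x1*eps with x0 = 1, x1 = -1/2.
    return 1.0 - 0.5 * eps

for eps in (0.01, -0.01):
    err = abs(exact_root(eps) - perturbative_root(eps))
    print(f"eps = {eps:+.2f}: error = {err:.2e}")  # error is O(eps^2)
```

Running the same comparison for $x^2 + \epsilon$ is impossible in this form: there is no real root at all for $\epsilon > 0$, which is the non-analytic behavior the answer describes.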