Is there a possibility to invert a polynomial locally? I have the following problem from control theory: imagine an ideal amplifier with a feedback loop:

First, let $A$ be independent of frequency. If $A$ is linear we can write the following: $v_o = (v_1-v_2) \cdot A$ where $v_2 = v_o \cdot F$, hence $v_o = (v_1-F \cdot v_o)A$. Since $A$ is linear we obtain $v_o = v_1 \cdot A - v_o \cdot A \cdot F$, hence $v_o + AF v_o = v_1 \cdot A \rightarrow v_o = v_1\,\frac{A}{1+A \cdot F}$.
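As a sanity check, the linear closed-loop formula can be verified numerically; the values of $A$, $F$ and $v_1$ below are arbitrary examples, not from the problem:

```python
# Check the linear closed-loop gain v_o = v_1 * A / (1 + A*F); the
# numbers here are illustrative examples only.
A = 100.0   # open-loop gain (linear, frequency-independent)
F = 0.1     # feedback factor
v1 = 1.0    # input voltage

vo = v1 * A / (1 + A * F)           # closed-loop output per the formula
residual = (v1 - F * vo) * A - vo   # vanishes iff vo solves the loop equation
print(vo, residual)
```

The residual confirms that the formula satisfies the original loop equation $v_o = (v_1 - F v_o)A$ exactly.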
But now let $A$ be a polynomial, say of degree 3 with no constant term: $A(x) = a_1 x + a_2 x^2 + a_3 x^3$. Then the equation above no longer holds. But if we assume we can invert the polynomial locally around zero, the equation becomes (after some karate): \begin{equation} \begin{aligned} v_o &= A(v_1 - F \cdot v_o)\\ A^{-1}(v_o) &= v_1 - F \cdot v_o \quad \text{(assuming $A$ is locally invertible)}\\ F \cdot v_o + A^{-1}(v_o) &= v_1\\ (F \cdot I + A^{-1})(v_o) &= v_1 \quad \text{(is that correct?)}\\ v_o &= (F \cdot I + A^{-1})^{-1}(v_1) \end{aligned} \end{equation}
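Even without a closed form for $(F \cdot I + A^{-1})^{-1}$, the loop equation $v_o = A(v_1 - F v_o)$ can be solved pointwise: substituting $u = v_1 - F v_o$ turns it into a cubic in $u$. A sketch with numpy, where the coefficients $a_1, a_2, a_3$ and $F$ are illustrative values:

```python
import numpy as np

def closed_loop_output(v1, a=(10.0, 1.0, 0.5), F=0.2):
    """Solve v_o = A(v1 - F*v_o) on the branch through (0, 0).

    A(u) = a1*u + a2*u^2 + a3*u^3; coefficients are example values.
    With u = v1 - F*v_o the loop equation becomes a cubic in u:
        F*a3*u^3 + F*a2*u^2 + (F*a1 + 1)*u - v1 = 0
    """
    a1, a2, a3 = a
    roots = np.roots([F * a3, F * a2, F * a1 + 1, -v1])
    real = roots[np.abs(roots.imag) < 1e-9].real
    u = real[np.argmin(np.abs(real))]   # branch continuous through zero
    return a1 * u + a2 * u**2 + a3 * u**3

print(closed_loop_output(0.0))    # zero in, zero out (no constant term in A)
print(closed_loop_output(0.001))  # small signal: close to v1 * a1/(1 + a1*F)
```

For these example coefficients the cubic is strictly monotone in $u$, so the real root is unique; for small inputs the output matches the linearized gain $a_1/(1+a_1 F)$.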
Lastly, we assume that the input signal $v_1$ is of the form $\sin(\omega x)$ and we want to know how much the output sine is distorted (that is, the percentage of higher harmonics like $\sin(2\omega x)$, etc.). If we had found the operator $B = (F \cdot I + A^{-1})^{-1}$ we could run the input signal through it and do an FFT to find out about the distortion. We assume the coefficients $a_1, \dots, a_3$ to be reasonably arbitrary (so I can play around with them if possible), as is $F$. How do I find the operator $B$? Is this somehow solvable by functional analysis? (Since it is a nonlinear operator we cannot use the Neumann series, or can we?)
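The FFT step can be sketched directly: drive the loop with one period of a sine, solve the loop equation sample by sample (via the cubic substitution $u = v_1 - F v_o$), and read the harmonic content off the spectrum. All coefficient values and the amplitude below are illustrative assumptions:

```python
import numpy as np

# Estimate harmonic distortion of the closed loop; a1..a3, F and the
# drive amplitude are example values, not from the question.
a1, a2, a3, F = 10.0, 1.0, 0.5, 0.2

def vo_of(v1):
    # Unique real root of F*a3*u^3 + F*a2*u^2 + (F*a1 + 1)*u - v1 = 0
    roots = np.roots([F * a3, F * a2, F * a1 + 1, -v1])
    u = roots[np.abs(roots.imag) < 1e-9].real
    u = u[np.argmin(np.abs(u))]
    return a1 * u + a2 * u**2 + a3 * u**3

N = 256
t = np.arange(N) / N                       # exactly one period, bin-aligned
amp = 0.5
vout = np.array([vo_of(amp * np.sin(2 * np.pi * ti)) for ti in t])

spec = np.abs(np.fft.rfft(vout)) / N
fundamental = spec[1]                      # bin 1 = drive frequency
harmonics = spec[2:]                       # bins 2.. = distortion products
thd = np.sqrt(np.sum(harmonics**2)) / fundamental
print(f"THD ≈ {100 * thd:.3f} %")
```

Sampling exactly one period keeps each harmonic in its own FFT bin, so no windowing is needed; the even-order term $a_2$ also produces a DC shift, which bin 0 captures and the THD sum excludes.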
I found the answer to my specific problem: the keywords are Taylor series and the implicit function theorem. Let $x$ be the input, $y$ the output, and $f$ a nonlinear function: \begin{equation} y = f(x - b \cdot y) \end{equation} (this is as above, just renamed). Then we can do the following: \begin{equation} 0 = f(x - b \cdot y) - y = F(x, y(x)) \end{equation} which means we have an implicit function. Under some assumptions that we can make, the implicit function theorem lets us find the derivative of $y$ at a given point; we choose $(x, y) = (0, 0)$. So we have: \begin{equation} y^\prime = -\frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}} \end{equation} As $f(x)$ we have $a_1 x + a_2 x^2 + a_3 x^3$, so substituting $x - by$ for $x$ and differentiating we obtain (after setting $x$ to zero in the numerator and $y$ to zero in the denominator!): \begin{equation} -\frac{a_1 - 2a_2 b y + 3a_3 b^2 y^2}{-a_1 b - 2a_2 b x - 3a_3 b x^2 - 1} \end{equation} Furthermore, setting the remaining $x$'s and $y$'s to zero (which we could have done already) gives: \begin{equation} \frac{a_1}{a_1 b + 1} \end{equation} for $y'$ at the point $(0,0)$. Differentiating further gives higher derivatives of $y$, which we can use as coefficients of a Taylor polynomial. This is somewhat tiresome but leads to exactly what I wanted. Thanks anyway.
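The implicit-differentiation step, and the order-by-order extraction of Taylor coefficients, can be checked symbolically. A sketch with sympy, using the renamed symbols $x, y, b$ from the answer (the Taylor-ansatz symbols $c_1, c_2, c_3$ are introduced here for illustration):

```python
import sympy as sp

# Verify y'(0,0) = a1/(a1*b + 1) via the implicit function theorem.
x, y, a1, a2, a3, b = sp.symbols('x y a1 a2 a3 b')

def f(u):
    return a1*u + a2*u**2 + a3*u**3

Fimp = f(x - b*y) - y                          # F(x, y) = 0 defines y(x)
yprime = -sp.diff(Fimp, x) / sp.diff(Fimp, y)
yprime0 = sp.simplify(yprime.subs({x: 0, y: 0}))
print(yprime0)                                 # a1/(a1*b + 1)

# Higher Taylor coefficients: plug the ansatz y = c1*x + c2*x^2 + c3*x^3
# into F and solve order by order (each equation is linear in its c_k).
c1, c2, c3 = sp.symbols('c1 c2 c3')
poly = sp.Poly(sp.expand(Fimp.subs(y, c1*x + c2*x**2 + c3*x**3)), x)
sols = {}
for k, ck in [(1, c1), (2, c2), (3, c3)]:
    eq = poly.coeff_monomial(x**k).subs(sols)
    sols[ck] = sp.solve(eq, ck)[0]
print(sp.simplify(sols[c1]))                   # same as y'(0): a1/(a1*b + 1)
```

The ansatz route avoids the repeated implicit differentiation by hand: matching coefficients of $x^k$ in $F(x, y(x)) = 0$ yields $c_1, c_2, c_3$ sequentially, since $c_k$ enters the $x^k$ coefficient only linearly.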