Say we want to find the roots of the equation $x^3+x+1=0$. We get: $$x^3+x+1=0$$ $$x(x^2+1)=-1$$ $$x=\frac {-1}{x^2+1}$$ Repeatedly substituting this fraction for $x$ in the denominator gives: $$x=\frac{-1}{\frac{1}{(x^2+1)^2}+1}$$
And so on, until we get a continued fraction. I have approximated this continued fraction both by hand and with some Python code, and found it gives a result accurate to 20 decimal places. My question is: can this technique be applied, in general, to any polynomial, like $x^5-4x^3+4x^2-9x+12$? I have not been able to make it work there. I have also had little luck finding information on this technique of root finding; the closest I have found is fixed-point iteration, which seems similar, but I am explicitly looking for continued fractions.
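The truncated continued fraction is the same as running the fixed-point iteration $x \mapsto \frac{-1}{x^2+1}$; a minimal Python sketch (the function names are illustrative):

```python
# Fixed-point iteration x -> -1/(x^2 + 1), whose limit is the
# real root of x^3 + x + 1 (approximately -0.68233).
def iterate(x0=0.0, steps=100):
    x = x0
    for _ in range(steps):
        x = -1.0 / (x * x + 1.0)
    return x

root = iterate()
print(root)                 # close to -0.68233
print(root**3 + root + 1)   # residual close to 0
```

Each pass through the loop adds one more level to the continued fraction, so truncating the fraction at depth $n$ and running $n$ iterations give the same number.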
It is true that, in this particular case, the iteration converges to the real root of $x^3+x+1$ for suitable initial values.
But this does not hold true in general, even for cubics. Try for example $x^3+x\color{red}{+3}$, instead. Iterating $x=\frac {\color{red}{-3}}{x^2+1} = \ldots$ with similar initial values will soon turn into an oscillating sequence between $\frac{-3 \pm \sqrt{5}}{2}$, while neither value is a root of $x^3+x+3$.
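The oscillation is easy to observe numerically; a small sketch, using the same kind of loop as above:

```python
# Iterating x -> -3/(x^2 + 1) for x^3 + x + 3 does not converge to a root:
# it settles into a 2-cycle between (-3 - sqrt(5))/2 and (-3 + sqrt(5))/2.
import math

x = 0.0
for _ in range(1000):
    x = -3.0 / (x * x + 1.0)

a = (-3 - math.sqrt(5)) / 2   # ~ -2.618
b = (-3 + math.sqrt(5)) / 2   # ~ -0.382
print(x)  # lands near one of the two cycle points, not near a root
```

One can check algebraically that the map sends $a$ to $b$ and $b$ to $a$ exactly, so the pair forms a genuine 2-cycle.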
Technically, it can be applied, but with the same caveats and only under similar assumptions:
the real root of $x^3 + x+1$ was $\approx -0.68233$, i.e. it had absolute value $\lt 1\,$;
the initial value of the iteration was chosen close enough to the root being targeted, e.g. $x = 0$.
The real root of $p(x) = x^5-4x^3+4x^2-9x+12$ is $\approx -2.70474 \in (-3,-2)$, as can be verified by $p(-3) \lt 0 \lt p(-2)$. To "shift" the root into the range of sub-unit absolute values, consider instead the polynomial $q(x) = p(x-3) = x^5 - 15 x^4 + 86 x^3 - 230 x^2 + 264 x - 60$ which has a root in $(0,1)$ at $\approx -2.70474 + 3 = 0.29526$. The iteration $x_{n+1}=\frac{60}{x_n^4 - 15 x_n^3 + 86 x_n^2 - 230 x_n + 264}$ with $x_0=0$ does in fact converge to that root.
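This can be verified directly; a short sketch of the shifted iteration:

```python
# Iteration x_{n+1} = 60 / (x^4 - 15x^3 + 86x^2 - 230x + 264) for
# q(x) = p(x - 3), started at x0 = 0. The limit, shifted back by 3,
# should be the real root of p(x) = x^5 - 4x^3 + 4x^2 - 9x + 12.
def p(x):
    return x**5 - 4*x**3 + 4*x**2 - 9*x + 12

x = 0.0
for _ in range(200):
    x = 60.0 / (x**4 - 15*x**3 + 86*x**2 - 230*x + 264)

print(x)         # close to 0.29526
print(p(x - 3))  # residual close to 0
```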
It should be noted that the above is not a "recipe" guaranteed to always work. It's just a plausible heuristic that improves the chances that such an iteration works in cases like these. A formal analysis of when $x_{n+1}=\frac{c}{f(x_n)}$ converges to a particular root of $xf(x)-c$ is beyond the scope of the question here.
[ EDIT ] $\;$ The proposed iteration is equivalent to finding a root $x$ of the polynomial $g(x) = x f(x) - c$ by iterating $x_{n+1} = \frac{c}{f(x_n)}$. This is a fixed-point iteration for the function $\frac{c}{f(x)}$, and a sufficient condition for it to converge to a root of $g(x)$ is $\left|c f'(x)\right|\lt f(x)^2$ in a neighborhood of the root, by Banach's fixed point theorem. This particular iteration does not appear to have been well studied, since better polynomial root-finding algorithms are known to exist, such as the Newton-Raphson method or Halley's method.
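For the original example, $f(x) = x^2+1$ and $c = -1$, the sufficient condition can be checked numerically at the root; a small sketch:

```python
# Check the contraction condition |c f'(r)| < f(r)^2 at the real root
# r of x^3 + x + 1, with f(x) = x^2 + 1 and c = -1.
def f(x):  return x * x + 1.0
def fp(x): return 2.0 * x      # f'(x)

r = -0.6823278038280193        # real root (approximate)
lhs = abs(-1.0 * fp(r))        # |c f'(r)|, about 1.36
rhs = f(r) ** 2                # f(r)^2, about 2.15
print(lhs < rhs)               # the iteration contracts near r
```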
Under additional assumptions, the iteration $x_{n+1} = \frac{c}{f(x_n)}$ is in fact similar to the Newton-Raphson method for $g(x) = xf(x) - c$:
$$ \require{cancel} x_{n+1} = x_n - \frac{g(x_n)}{g'(x_n)} = x_n - \frac{x_nf(x_n) - c}{f(x_n)+x_n f'(x_n)} = \frac{\cancel{x_nf(x_n)}+x_n^2f'(x_n) - \cancel{x_nf(x_n)}+c}{f(x_n)+x_n f'(x_n)} = \frac{x_n^2f'(x_n)+c}{f(x_n)+x_n f'(x_n)} $$
When $\,|x^2 f'(x)| \ll |c|\,$ and $\,|xf'(x)| \ll |f(x)|\,$ this reduces to $\,x_{n+1} \approx \frac{c}{f(x_n)}\,$, which also justifies the heuristic that the iteration has a better chance to work for small magnitudes $|x|$ of the root.
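The two iterations can be run side by side on the original example, $f(x) = x^2+1$, $c = -1$, $g(x) = x^3+x+1$; a small sketch:

```python
# Compare x_{n+1} = c/f(x_n) with Newton-Raphson on g(x) = x f(x) - c,
# here f(x) = x^2 + 1 and c = -1, so g(x) = x^3 + x + 1.
def g(x):  return x**3 + x + 1
def gp(x): return 3*x**2 + 1   # g'(x)

x_fp, x_nr = 0.0, 0.0
for _ in range(100):
    x_fp = -1.0 / (x_fp**2 + 1.0)     # fixed-point step c/f(x)
    x_nr = x_nr - g(x_nr) / gp(x_nr)  # Newton-Raphson step

print(x_fp, x_nr)  # both approach the real root, about -0.68233
```

Newton-Raphson converges quadratically while the fixed-point iteration converges only linearly, but near a small-magnitude root both head to the same limit.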