The formula for Newton's iteration method (which is a zero-finding problem) is $$ x_{k+1}=x_{k} - \frac{f(x_k)}{f'(x_k)}$$
I read in my textbook that this can also be seen as a fixed-point iteration, where the zero of the function $f$ is a fixed point of another function $g$. That is, something like this: $$g(x)=x - \frac{f(x)}{f'(x)}$$
My question is, why is this useful?
For example, if we wanted to prove that Newton's method for $f(x)=x^2-2$ converges on the interval $[1,2]$, how could the fixed-point property be useful? I don't see what the point is, or how it is different from just using Newton's method directly.
There is the Banach fixed point theorem, which is valid even in much more general settings. It tells us that a fixed-point iteration converges under certain conditions. In the real case, the condition $|g'(x)| \le q < 1$ on an interval around the fixed point, with $q$ constant, suffices. For your $g$ one computes $$g'(x) = 1 - \frac{f'(x)^2 - f(x)f''(x)}{f'(x)^2} = \frac{f(x)\,f''(x)}{f'(x)^2},$$ which vanishes at a simple root of $f$ (since $f$ is zero there). So the contraction condition holds on a sufficiently small interval around the root, and therefore the fixed-point framework tells us that Newton's method converges in that case.
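To make this concrete, here is a minimal sketch (the function names `g` and `fixed_point_iterate` are my own, not from any library) that runs Newton's method for $f(x)=x^2-2$ purely as a fixed-point iteration $x_{k+1}=g(x_k)$, starting from $x_0 = 1.5 \in [1,2]$:

```python
# Fixed-point view of Newton's method for f(x) = x^2 - 2.
# The fixed point of g in [1, 2] is sqrt(2), the zero of f.

def g(x):
    """One Newton step for f(x) = x^2 - 2, written as a fixed-point map."""
    f = x * x - 2.0      # f(x)
    fp = 2.0 * x         # f'(x)
    return x - f / fp

def fixed_point_iterate(g, x0, tol=1e-12, max_iter=50):
    """Iterate x <- g(x) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = fixed_point_iterate(g, 1.5)
print(root)  # close to sqrt(2) = 1.41421356...
```

Note that the loop never mentions $f$ at all: it only knows it is looking for a point with $g(x)=x$, which is exactly the abstraction the fixed-point viewpoint buys you.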