Newton's method and the inverse function theorem

I'm trying to work through the following problem from lecture summaries:

Let $f:\mathbb{R}^n\rightarrow \mathbb{R}^n$, $f\in C^1(U)$ with $U\subset \mathbb{R}^n$ open, and let $x_p\in U$ be such that $f(x_p)=0$ and $J_f(x_p)$ is invertible.
Let $(x_n)_{n=1}^\infty$ be defined as follows:
$x_1\in U,\ \forall n\geq 1:\ x_{n+1}=x_n-\left[J_f(x_n) \right]^{-1}f(x_n)$

Using the inverse function theorem, prove that $\exists r>0:\ \forall x_1\in B_r(x_p)\ x_n\rightarrow x_p$
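To get a numerical feel for the claim before proving it, here is a small sketch. The map $f$, its root, and the starting point below are my own illustrative choices, not part of the problem; the iteration uses the standard Newton step, which *subtracts* the correction $[J_f(x_n)]^{-1}f(x_n)$.

```python
import numpy as np

# Illustrative example map f: R^2 -> R^2 with a zero at x_p = (1, 1)
# (not part of the original problem).
def f(x):
    return np.array([x[0]**2 - x[1], x[1]**2 - x[0]])

def J_f(x):
    # Jacobian of f at x
    return np.array([[2.0 * x[0], -1.0],
                     [-1.0, 2.0 * x[1]]])

def newton(x1, steps=20):
    x = np.asarray(x1, dtype=float)
    for _ in range(steps):
        # Newton step: x_{n+1} = x_n - J_f(x_n)^{-1} f(x_n),
        # computed by solving the linear system instead of inverting.
        x = x - np.linalg.solve(J_f(x), f(x))
    return x

x_p = np.array([1.0, 1.0])
print(np.allclose(newton([1.3, 0.8]), x_p))  # starting point inside the basin
```

Starting points sufficiently close to $x_p$ converge to it, which is exactly the statement to be proved.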

My approach: from the inverse function theorem we know that there exists $r>0$ such that $f|_{B_r(x_p)}$ is a bijection onto its image and $J_{f^{-1}}(y)=\left[J_f\left(f^{-1}(y)\right)\right]^{-1}$.
I believe the general direction is to use the linear approximation
$$0=f(x_p)=f(x_n)+J_f(x_n)\,(x_p-x_n)+o(\|x_p-x_n\|),$$
which, after applying $[J_f(x_n)]^{-1}$ and rearranging, gives
$$x_p=x_n-[J_f(x_n)]^{-1}f(x_n)+o(\|x_p-x_n\|).$$

I haven't justified that $[J_f(x_n)]^{-1}$ exists (I think that's where the inverse function theorem should come in), and I'm not sure how to prove that the sequence converges.
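On the invertibility worry specifically: $\det J_f$ is continuous, so $\det J_f(x_p)\neq 0$ forces it to stay nonzero on some small ball around $x_p$. A quick numerical check of this, again for a sample map of my own choosing (not from the problem), where $\det J_f(x,y)=4xy-1$:

```python
import numpy as np

# For the illustrative map f(x, y) = (x^2 - y, y^2 - x), the Jacobian
# determinant is det J_f(x, y) = 4xy - 1, continuous and equal to 3
# at x_p = (1, 1).
def det_J(x):
    return 4.0 * x[0] * x[1] - 1.0

rng = np.random.default_rng(0)
x_p = np.array([1.0, 1.0])
r = 0.1
# Sample points in a ball-like neighborhood of x_p and check that
# det J_f stays bounded away from zero there.
samples = x_p + r * rng.uniform(-1.0, 1.0, size=(1000, 2))
print(all(det_J(s) > 2.0 for s in samples))
```

So on a small enough ball the inverse $[J_f(x)]^{-1}$ exists at every point, not just at $x_p$.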