Convergence of Newton's method applied to a problem


Problem: The vector function $\textbf{x}\mapsto\textbf{f}(\textbf{x})$ of two variables is defined by

$$f_1(x_1,x_2)=x_1^2+x_2^2-2,\qquad f_2(x_1,x_2)=x_1-x_2$$

The equation $\textbf{f}(\textbf{x})=\textbf{0}$ has two solutions, $x_1=x_2=1$ and $x_1=x_2=-1$. Show that the Newton iteration converges to $(1,1)^T$ if $x_1^{(0)}+x_2^{(0)}$ is positive, and to the other solution if $x_1^{(0)}+x_2^{(0)}$ is negative.

I am trying to use a theorem that says

If $\textbf{f}(\textbf{m})=\textbf{0}$, all the second-order partial derivatives of $\textbf{f}$ are defined and continuous, and the Jacobian matrix $J_f(\textbf{m})$ of $\textbf{f}$ at the point $\textbf{m}$ is nonsingular, then the sequence $(\textbf{x}^{(k)})$ defined by Newton's method, $\textbf{x}^{(k+1)}=\textbf{x}^{(k)}-[J_f(\textbf{x}^{(k)})]^{-1}\textbf{f}(\textbf{x}^{(k)})$, $k=0,1,2,\dots$, converges to the solution $\textbf{m}$ provided that $\textbf{x}^{(0)}$ is sufficiently close to $\textbf{m}$. Moreover, the convergence is at least quadratic.

I am not sure if this will be helpful, but after one iteration with $\textbf{x}^{(0)}=(x_1^{(0)},x_2^{(0)})$ you get

$$ x_1^{(1)} = x_2^{(1)} = \frac{\big(x_1^{(0)}\big)^2 + \big(x_2^{(0)}\big)^2+2}{2\big(x_1^{(0)} +x_2^{(0)}\big)} $$
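This one-step formula can be checked numerically. A small sketch (the function names and the starting point below are just for illustration, not part of the problem):

```python
import numpy as np

def f(x):
    """The system f(x) = 0 from the problem."""
    return np.array([x[0]**2 + x[1]**2 - 2, x[0] - x[1]])

def jacobian(x):
    """Jacobian matrix of f at x."""
    return np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

def newton_step(x):
    """One Newton step: x - J(x)^{-1} f(x)."""
    return x - np.linalg.solve(jacobian(x), f(x))

x0 = np.array([3.0, 0.5])
x1 = newton_step(x0)

# After one step both components equal
# ((x1^(0))^2 + (x2^(0))^2 + 2) / (2 (x1^(0) + x2^(0))):
predicted = (x0[0]**2 + x0[1]**2 + 2) / (2 * (x0[0] + x0[1]))
print(x1, predicted)  # both components equal the predicted value
```

Note that the Newton update is computed with `np.linalg.solve` rather than by forming $[J_f]^{-1}$ explicitly, which is the usual numerically preferable choice.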

Best answer:

If you grind through the computations, you get $x_{n+1} = \begin{bmatrix} {2 +x_n(1)^2 +x_n(2)^2 \over 2x_n(1)+2x_n(2)} \\ {2 +x_n(1)^2 +x_n(2)^2 \over 2x_n(1)+2x_n(2)} \end{bmatrix}$.

So, if $x_0(1)+x_0(2) > 0$, then $x_n > 0$ (componentwise) for all $n \ge 1$, and similarly, if $x_0(1)+x_0(2) < 0$, then $x_n < 0$ (componentwise) for all $n \ge 1$. In particular, the denominator $2x_n(1)+2x_n(2)$ never vanishes, so the iteration stays well defined.

Furthermore, after the first iteration, $x_n(1) = x_n(2)$.

Letting $y_n = x_n(1)$, the iteration becomes $y_{n+1} = {y_n^2+1 \over 2 y_n}$ which is the Newton iteration for solving $y^2 = 1$.
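A quick numerical check of this reduced scalar iteration (the starting values are arbitrary):

```python
def step(y):
    # Newton iteration for y^2 = 1: y_{n+1} = (y_n^2 + 1) / (2 y_n)
    return (y * y + 1) / (2 * y)

for y in (0.1, 5.0, -0.1, -5.0):
    for _ in range(50):
        y = step(y)
    print(y)  # +1.0 for the positive starts, -1.0 for the negative starts
```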

It is straightforward to show (because $y \mapsto y^2-1$ is convex) that if $y_1 > 0$, then $y_2 \ge 1$, and subsequently $1 \le y_{n+1} \le y_n$ for all $n \ge 2$. Hence $y_n$ converges to some $y \ge 1$, and taking the limit in the iteration equation gives $y = 1$.

A similar analysis applies if $y_1 < 0$, except that the iteration converges to $-1$.
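Putting the pieces together, a short sketch of the full two-dimensional iteration, with starting points chosen arbitrarily on each side of the line $x_1 + x_2 = 0$ (the helper name is mine, not from the problem):

```python
import numpy as np

def newton(x, iters=50):
    """Run Newton's method on f(x) = (x1^2 + x2^2 - 2, x1 - x2)."""
    x = np.asarray(x, dtype=float)
    for _ in range(iters):
        fx = np.array([x[0]**2 + x[1]**2 - 2, x[0] - x[1]])
        J = np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
        x = x - np.linalg.solve(J, fx)
    return x

print(newton([4.0, -3.0]))   # x1 + x2 > 0: converges to (1, 1)
print(newton([-0.2, -0.1]))  # x1 + x2 < 0: converges to (-1, -1)
```

Note that the first starting point has mixed signs componentwise, yet its sum is positive, so after one step both components become equal and positive, exactly as the argument above predicts.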