Verification: Newton's method convergence


$P(x)=x^2-x-1$ has a root $\frac{1+\sqrt5}{2}\approx1.61803398$

My iterations are:
$x_0=2$
$x_1=\frac{5}{3}\approx1.6667$
$x_2=\frac{34}{21}\approx1.6190$
$x_3=\frac{1597}{987}\approx1.6180$

So it took me 3 iterations to get close to the root. The question is: is the rate of convergence what you would expect? I was not sure what the rate of convergence should be, so I am struggling with this.
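(For reference, the iterates above can be reproduced exactly with rational arithmetic; here is a small Python sketch of the Newton step for $P$. Note that the convergents are ratios of Fibonacci numbers.)

```python
from fractions import Fraction

def newton_step(x):
    # One Newton step for P(x) = x^2 - x - 1:  x - P(x)/P'(x)
    return x - (x * x - x - 1) / (2 * x - 1)

x = Fraction(2)
for n in range(4):
    print(f"x_{n} = {x} = {float(x):.8f}")
    x = newton_step(x)
```

Running this prints $x_0=2$, $x_1=5/3$, $x_2=34/21$, $x_3=1597/987$, matching the iterates listed above.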

Best answer:

You should expect quadratic convergence, assuming you start relatively close to a simple root, as is the case here. The root is $\phi=\frac{\sqrt5+1}2$, so let $x_n=\phi+\epsilon$ where $\epsilon$ is the error. Then, using $\phi^2-\phi-1=0$, $$y=x_n^2-x_n-1=\phi^2+2\phi\epsilon+\epsilon^2-\phi-\epsilon-1=(2\phi-1)\epsilon+\epsilon^2$$ $$y^{\prime}=2x-1=2\phi+2\epsilon-1=(2\phi-1)+2\epsilon$$ Then the next $x$ will be $$\begin{align}x_{n+1}&=x_n-\frac y{y^{\prime}}=\phi+\epsilon-\frac{(2\phi-1)\epsilon+\epsilon^2}{(2\phi-1)+2\epsilon}\\ &=\phi+\epsilon-\frac{(2\phi-1)\epsilon+2\epsilon^2-\epsilon^2}{(2\phi-1)+2\epsilon}\\ &=\phi+\epsilon-\epsilon+\frac{\epsilon^2}{(2\phi-1)+2\epsilon}\\ &=\phi+\frac{\epsilon^2}{(2\phi-1)+2\epsilon}\end{align}$$ So if you neglect the $2\epsilon$ in the denominator, the next error will be $$\epsilon_{n+1}=\frac1{2\phi-1}\epsilon_n^2=\frac1{\sqrt5}\epsilon_n^2$$ This is quadratic convergence.

We can attempt to confirm the rate of convergence experimentally as follows: perform Newton's iteration until it converges. The step after the update drops below roundoff error is about as good as you will get, so once the error is less than about $10^{-8}$ I would take the next iterate, $x_N$, to be the exact value. Now you have a vector of iterates $(x_0,x_1,\dots,x_{N-1},x_N)$, and you can subtract the last value, assumed good, from all of them to get $(x_0-x_N,x_1-x_N,\dots,x_{N-1}-x_N,0)$. Throw out the last element, which is automatically $0$, and take logs of the absolute values of the rest: $$(\ln|x_0-x_N|,\ln|x_1-x_N|,\dots,\ln|x_{N-1}-x_N|)=(\ln|\epsilon_0|,\ln|\epsilon_1|,\dots,\ln|\epsilon_{N-1}|)$$ Now we want to model our error as $\epsilon_{n+1}=K\epsilon_n^r$.
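(As a quick numerical sanity check of the derived error law: the ratio $\epsilon_{n+1}/\epsilon_n^2$ should approach $1/(2\phi-1)=1/\sqrt5\approx0.4472$ as the error shrinks. A short Python sketch:)

```python
import math

phi = (1 + math.sqrt(5)) / 2      # the root
K_pred = 1 / (2 * phi - 1)        # predicted constant, 1/sqrt(5)

x = 2.0
errors = [x - phi]
for _ in range(4):
    x = x - (x * x - x - 1) / (2 * x - 1)   # Newton step for x^2 - x - 1
    errors.append(x - phi)

for e0, e1 in zip(errors, errors[1:]):
    # ratio eps_{n+1} / eps_n^2, versus the predicted constant
    print(f"ratio = {e1 / e0**2:.4f}   (predicted {K_pred:.4f})")
```

The early ratios are off because the neglected $2\epsilon$ term still matters, but they settle toward $0.4472$ as $\epsilon\to0$.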
Taking logarithms, $$\ln|\epsilon_{n+1}|=r\ln|\epsilon_n|+\ln K$$ This is a first-order linear difference equation for $\ln|\epsilon_n|$ and the general solution is $$\ln|\epsilon_n|=c_1\cdot r^n+\frac{\ln K}{1-r}$$ Taking differences kills the constant term: $$\ln|\epsilon_{n+1}|-\ln|\epsilon_n|=c_1(r-1)r^n\tag{1}$$ Taking logarithms yet again (the errors decrease, so $\ln|\epsilon_n|-\ln|\epsilon_{n+1}|>0$, which forces $c_1<0$), $$\ln(\ln|\epsilon_n|-\ln|\epsilon_{n+1}|)=n\ln r+\ln(-c_1(r-1))\tag{2}$$ At long last we have a linear equation $y=mx+b$ where the slope is $m=\ln r$. We can plot the values of $y=\ln(\ln|\epsilon_n|-\ln|\epsilon_{n+1}|)$ vs. $x=n$ and so find the rate of convergence $r$. Starting with $x_0=2$, convergence is reached to double precision at $n=5$, so we have $5+1-1-1=4$ points to run a linear regression through. Our fit is $m=0.6721$ and $b=0.704$, so $r=e^m=1.9584$, pretty close to $2$, as it should be. The linear fit looks something like this: [figure 1]
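(This procedure can be sketched in Python. The convergence threshold below is an assumption, and the fitted numbers depend on which points enter the regression, so the output is illustrative rather than a reproduction of the figure; with only four points, and the first iterate far from the root, the estimate is noisy but lands near $2$.)

```python
import math

def step(x):
    # Newton step for P(x) = x^2 - x - 1
    return x - (x * x - x - 1) / (2 * x - 1)

# Iterate until the update is negligible, then take the next
# iterate as the "exact" root x_N (threshold is an assumption)
xs = [2.0]
while abs(step(xs[-1]) - xs[-1]) > 1e-10:
    xs.append(step(xs[-1]))
x_N = step(xs[-1])

log_eps = [math.log(abs(x - x_N)) for x in xs]   # ln|eps_0| .. ln|eps_{N-1}|

# Equation (2): y_n = ln(ln|eps_n| - ln|eps_{n+1}|) = n*ln(r) + const
ys = [math.log(log_eps[n] - log_eps[n + 1]) for n in range(len(log_eps) - 1)]
ns = range(len(ys))

# Ordinary least-squares slope m; the rate of convergence is r = e^m
nbar = sum(ns) / len(ys)
ybar = sum(ys) / len(ys)
m = (sum((n - nbar) * (y - ybar) for n, y in zip(ns, ys))
     / sum((n - nbar) ** 2 for n in ns))
print(f"m = {m:.4f},  r = e^m = {math.exp(m):.4f}")
```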

Now we can go back to the general solution above and do a linear fit with $x=r^n$ and $y=\ln|\epsilon_n|$. Our fit is now $m=c_1=-2.1157$ and $b=\frac{\ln K}{1-r}=1.1831$, so $K=e^{b(1-r)}=0.3218$. Compare this with our prediction of $K=1/\sqrt5\approx0.4472$: it is not as good as our value of $r$ was, but at least of the right order of magnitude. The linear fit looks something like this: [figure 2]
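(A matching sketch for this second fit. For simplicity it plugs in the theoretical rate $r=2$ rather than the fitted value, and reuses the same assumed threshold as before, so the recovered $K$ differs from the quoted $0.3218$; as noted, it is only order-of-magnitude accurate either way.)

```python
import math

def step(x):
    return x - (x * x - x - 1) / (2 * x - 1)   # Newton step for x^2 - x - 1

# Same pipeline as before: iterate, take the next iterate as "exact"
xs = [2.0]
while abs(step(xs[-1]) - xs[-1]) > 1e-10:
    xs.append(step(xs[-1]))
x_N = step(xs[-1])
log_eps = [math.log(abs(x - x_N)) for x in xs]

r = 2.0                                   # theoretical rate (an assumption)
X = [r ** n for n in range(len(log_eps))]
Y = log_eps

# Least-squares fit Y = c1 * X + b, then K = exp(b * (1 - r))
xbar = sum(X) / len(X)
ybar = sum(Y) / len(Y)
c1 = (sum((x - xbar) * (y - ybar) for x, y in zip(X, Y))
      / sum((x - xbar) ** 2 for x in X))
b = ybar - c1 * xbar
K = math.exp(b * (1 - r))
print(f"c1 = {c1:.4f},  b = {b:.4f},  K = {K:.4f}  "
      f"(predicted 1/sqrt(5) = {1 / math.sqrt(5):.4f})")
```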

Just wanted to show how you can obtain the rate of convergence both theoretically and experimentally.