Newton's method: show an inequality


I have written down the formula of Newton's method to approximate $a^{1/n}$, which is $$x_{k+1}=\left (1-\frac{1}{n}\right )x_k+\frac{a}{nx_k^{n-1}}$$
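As a quick sanity check, here is a minimal numerical sketch of this iteration (the function name `nth_root_newton` and the example values are my own choices, not from the question):

```python
def nth_root_newton(a, n, x0, steps=10):
    """Iterate x_{k+1} = (1 - 1/n) x_k + a / (n x_k^(n-1))."""
    x = x0
    for _ in range(steps):
        x = (1 - 1/n) * x + a / (n * x ** (n - 1))
    return x

# Example: approximate 8^(1/3) starting from x0 = 8.
print(nth_root_newton(8, 3, 8.0))  # approaches 2.0
```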

Now I want to show the inequality $x_{k+1}-a\leq \left (1-\frac{1}{n}\right )(x_k-a)$.

Since we subtract $a$ on the left-hand side instead of $a^{1/n}$, we cannot interpret this quantity as the error, right?

I managed to show the inequality, but only for $a\geq 1$. Assuming $x_k\geq a^{1/n}$, we have $$x_{k+1}=\left (1-\frac{1}{n}\right )x_k+\frac{a}{nx_k^{n-1}} \leq \left (1-\frac{1}{n}\right )x_k+\frac{a}{n\left (a^{1/n}\right )^{n-1}} = \left (1-\frac{1}{n}\right )x_k+\frac{a}{na^{1-\frac{1}{n}}} = \left (1-\frac{1}{n}\right )x_k+\frac{a^{\frac{1}{n}}}{n} \leq \left (1-\frac{1}{n}\right )x_k+\frac{a}{n},$$ where the last step uses $a^{1/n}\leq a$, which holds for $a\geq 1$. Then subtracting $a$ from both sides we get $$x_{k+1}-a\leq \left (1-\frac{1}{n}\right )x_k+\frac{a}{n}-a=\left (1-\frac{1}{n}\right )x_k-\left (1-\frac{1}{n}\right )a=\left (1-\frac{1}{n}\right )(x_k-a)$$
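As a numerical spot check of this inequality for $a \geq 1$ (a sketch; the particular $a$, $n$, and starting point are arbitrary choices of mine, with $x_0 \geq a^{1/n}$):

```python
# Check x_{k+1} - a <= (1 - 1/n) (x_k - a) along the iteration
# (hypothetical example values: a = 5 >= 1, n = 4, x_0 = 3 >= a^(1/4)).
a, n = 5.0, 4
x = 3.0
for _ in range(8):
    x_next = (1 - 1/n) * x + a / (n * x ** (n - 1))
    assert x_next - a <= (1 - 1/n) * (x - a) + 1e-12
    x = x_next
print("inequality held at every step; x =", x)
```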

Could you give me a hint for the case $a<1$? Or is there another way to show the inequality without splitting into cases for $a$?



Modified to give a better estimate for the error and avoid the need for $n$ to be an integer.


Let $x_k$ be the $k$th iterate of the Newton–Raphson procedure applied to $x^n = a$. Define $\xi = a^{1/n}$ and assume $x_k \geqslant \xi$. We also introduce $\delta_k = x_k - \xi \geqslant 0$ and $\rho_k = \delta_k/\xi$, the error and relative error respectively.

Then, calculating $\rho_{k+1}$, we have \begin{align} \rho_{k+1} &= \frac{1}{\xi}\Big( x_{k+1}-\xi \Big) \\ &= \frac{1}{\xi}\Big( \frac{n-1}{n}x_k + \frac{a}{nx_k^{n-1}} - \xi \Big) \\ &= \rho_k - \frac{1}{n\xi}\Big ( x_k - \frac{\xi^n}{x_k^{n-1}} \Big) \\ &= \rho_k - \frac{1+\rho_k}{n} \Big(1 - (1+\rho_k)^{-n} \Big) \tag{1}\label{E1} \end{align} Now concentrate on the second term. In general, if $f(\theta) = \big(1+\theta\big)\big(1-(1+\theta)^{-n}\big)$ with $n \geqslant 1$ and $\theta \geqslant 0$, we claim that \begin{aligned} \theta \leqslant f(\theta) \leqslant n\theta. \end{aligned} The left inequality is straightforward. For the right, observe that $f'(\theta) = 1 +(n-1)(1+\theta)^{-n}$, that $f'(0) = n$, and that $f'(\theta)$ is decreasing. Using the fact that $f(0) = 0$ and integrating, we then see that $f(\theta) \leqslant n\theta$.
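A quick numerical check of these bounds on $f(\theta)$ (a sketch; the sampled values of $n$ and $\theta$ are my own choices):

```python
def f(theta, n):
    """f(theta) = (1 + theta) * (1 - (1 + theta)^(-n))."""
    return (1 + theta) * (1 - (1 + theta) ** (-n))

# Verify theta <= f(theta) <= n * theta on a grid of sample points.
for n in (1.5, 2, 5, 10):
    for k in range(1, 50):
        theta = k / 10
        assert theta <= f(theta, n) <= n * theta + 1e-12
print("theta <= f(theta) <= n * theta on all samples")
```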

This now gives us what we need. The right-hand inequality shows that $\rho_{k+1} \geqslant 0$, so the error remains nonnegative. The left-hand inequality gives $\rho_{k+1} \leqslant \Big(1-\frac{1}{n}\Big)\rho_k$, which is the required relation (expressed in relative terms). So the error converges monotonically to $0$, and $x_k \searrow \xi$. This is as far as the question went.
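A numerical sketch of this monotone behaviour (the values of $a$, $n$, and the starting point are arbitrary choices of mine): the relative error stays nonnegative and contracts by at least the factor $1-\frac{1}{n}$ at every step.

```python
a, n = 10.0, 5
xi = a ** (1 / n)
x = 4.0            # start above the root, so rho_0 > 0
rho = x / xi - 1
for _ in range(10):
    x = (1 - 1/n) * x + a / (n * x ** (n - 1))
    rho_next = x / xi - 1
    # rho stays nonnegative and shrinks by at least the factor 1 - 1/n
    assert -1e-12 <= rho_next <= (1 - 1/n) * rho + 1e-12
    rho = rho_next
print("final relative error:", rho)
```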


We can use \eqref{E1} to strengthen the error analysis. If $\rho_k < 3/(n+1)$, then we can apply the general binomial theorem to write \begin{aligned} \rho_{k+1} &= \rho_k - \frac{1}{n}\Big( 1 +\rho_k - 1 + (n-1)\rho_k - \frac{(n-1)n}{2}\rho_k^2 + \frac{(n-1)(n)(n+1)}{3!}\rho_k^3 - \cdots \Big)\\ &=\frac{n-1}{2}\rho_k^2-\frac{(n-1)(n+1)}{3!}\rho_k^3+\cdots \end{aligned} The limit on the size of $\rho_k$ makes this a convergent alternating series with decreasing terms, so the sum is less than its first term: $$ \rho_{k+1} \leqslant \frac{n-1}{2} \rho_k^2. $$ This means we roughly double the number of correct significant figures on each iteration, and it leaves a way forward to measure the error even more exactly should you need it.
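A numerical sketch of the quadratic bound (the parameters are arbitrary choices of mine, with $\rho_0 < 3/(n+1)$):

```python
a, n = 7.0, 4
xi = a ** (1 / n)
x = 1.5 * xi       # rho_0 = 0.5 < 3/(n+1) = 0.6
rho = 0.5
for _ in range(5):
    x = (1 - 1/n) * x + a / (n * x ** (n - 1))
    rho_next = x / xi - 1
    # quadratic contraction: rho_{k+1} <= (n-1)/2 * rho_k^2
    assert rho_next <= (n - 1) / 2 * rho ** 2 + 1e-12
    rho = rho_next
print("final relative error:", rho)
```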


Let $$g(x) = \left(1 - \frac{1}{n}\right)x + \frac{a}{nx^{n - 1}}.$$ Then the iteration satisfies $$x_{k + 1} = g(x_k).$$ Let $c = a^{1/n}$ and let $e_{k} = x_{k} - c$ be the sequence of errors. By the construction of Newton's method, $g(c) = c$. Hence by the mean value theorem, $$x_{k + 1} = g(x_{k}) = g(c) + g'(\xi_k)(x_k - c) = c + g'(\xi_k)(x_k - c),$$ where $\xi_{k}$ is between $c$ and $x_k$. Hence $$e_{k + 1} = g'(\xi_k)e_{k}.$$ The power rule gives $$g'(x) = \left(1 - \frac{1}{n}\right)\left(1 - \frac{a}{x^n}\right).$$ Assume that $n > 1$ (real, not necessarily an integer) and that $x_{k} > c$. Then $e_{k} > 0$, and $$x_{k} > c \implies \xi_k > c \implies \xi_{k}^n > c^n = a \implies 0 < g'(\xi_k) < \left(1 - \frac{1}{n}\right) \implies 0 < e_{k + 1} < \left(1 - \frac{1}{n}\right)e_{k},$$ as desired.

Note that to solve $f(c) = 0$, Newton's method constructs $g(x) = x - \frac{f(x)}{f'(x)}$ and seeks a fixed point of $g$, since $f(c) = 0 \iff g(c) = c$. Note that $g'(c) = 0$ and $g''(c) = \frac{f''(c)}{f'(c)}$. Thus if you expand the Taylor series one more term, you find that, when it converges, Newton's method is quadratically convergent provided $f'(c) \neq 0$ and $f^{(3)}$ is continuous at $c$: $$e_{k + 1} = g'(c)e_{k} + \frac{g''(\xi_k)}{2}e_{k}^2 = \frac{g''(\xi_k)}{2}e_{k}^2 \implies \frac{e_{k + 1}}{e_{k}^2} \to \frac{g''(c)}{2} = \frac{f''(c)}{2f'(c)} \text{ as $k \to \infty$}.$$
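A numerical sketch of this limit for $f(x) = x^n - a$, where $\frac{f''(c)}{2f'(c)} = \frac{n-1}{2c}$ (the particular $a$, $n$, and starting point are my own choices):

```python
a, n = 2.0, 3
c = a ** (1 / n)
limit = (n - 1) / (2 * c)   # f''(c) / (2 f'(c)) for f(x) = x^n - a
x = 1.5
for _ in range(4):
    e = x - c
    x = (1 - 1/n) * x + a / (n * x ** (n - 1))
    ratio = (x - c) / e ** 2
print(ratio, "vs", limit)
```

After a few steps the printed ratio settles near the predicted limit, consistent with quadratic convergence.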