Is there an implicit mechanism in Newton Raphson method to ensure divide-by-zero never happens?


This question is in relation to Newton’s Method for optimization.

Is Newton’s Method always applicable if the function f is twice continuously differentiable? Argue with the help of the function

$f(x) = 2x^3 - 5x$ at $x = 0$.

For the above question, I computed the first- and second-order derivatives and evaluated them at $x = 0$; since $f''(0) = 0$, the Newton step triggers a divide-by-zero.

But is this just a case of bad initialization? Or can this situation also arise during the optimization iterations themselves?
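As a minimal sketch of the computation described above (the function names `fp` and `fpp` are my own), the Newton step for optimization, $x_{k+1} = x_k - f'(x_k)/f''(x_k)$, cannot even be formed at $x_0 = 0$:

```python
def fp(x):
    """First derivative of f(x) = 2x^3 - 5x."""
    return 6 * x**2 - 5

def fpp(x):
    """Second derivative of f(x) = 2x^3 - 5x."""
    return 12 * x

x0 = 0.0
# f''(0) = 0, so the Newton step f'(x0) / f''(x0) divides by zero.
try:
    step = fp(x0) / fpp(x0)
except ZeroDivisionError:
    step = None

print(step)  # None: the iteration fails at the very first step
```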

Best answer:

No, nothing prevents a Newton step from an arbitrary starting point from landing on a zero of the denominator.

Convergence statements for the Newton method usually assume that the root is simple, that is, that the denominator (the derivative) stays safely away from zero in a neighborhood of the root. Under that assumption, division by zero is no concern for the convergence analysis.
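To illustrate the simple-root case numerically (my own example, not from the answer): for $g(x) = x^2 - 2$ the root $\sqrt 2$ is simple, $g'(\sqrt 2) = 2\sqrt 2 \neq 0$, so the division is harmless and the error shrinks quadratically:

```python
def newton_errors(g, gp, x, root, steps):
    """Run Newton's method and record the absolute error after each step."""
    errs = []
    for _ in range(steps):
        x = x - g(x) / gp(x)   # denominator gp(x) stays away from 0 near the root
        errs.append(abs(x - root))
    return errs

# g(x) = x^2 - 2, simple root sqrt(2)
errs = newton_errors(lambda x: x * x - 2, lambda x: 2 * x, 1.0, 2**0.5, 4)
print(errs)  # errors roughly square at each step (quadratic convergence)
```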

In the scalar case, the Newton method at a multiple root tends to undershoot, reducing the distance to the root by a factor of $1-\frac1m$ in each step, where $m$ is the multiplicity. This again means that the numerator goes to zero faster than the denominator. However, any measure to accelerate this process, like Aitken's delta-squared process or modifications of the step length, risks landing very close to the root, where the root multiplicity makes the evaluation of the function and its derivatives rather random due to floating-point errors.
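The contraction factor $1-\frac1m$ can be seen exactly on the model problem $f(x) = x^m$ (my own illustration): the Newton step becomes $x - \frac{x^m}{m x^{m-1}} = \left(1-\frac1m\right)x$, i.e. plain linear convergence:

```python
# For f(x) = x^m with an m-fold root at 0, each Newton step
# multiplies the iterate by exactly (1 - 1/m).
m = 3
x = 1.0
ratios = []
for _ in range(5):
    x_new = x - x**m / (m * x**(m - 1))
    ratios.append(x_new / x)   # observed per-step contraction factor
    x = x_new

print(ratios)  # every ratio is 1 - 1/3 = 2/3
```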

Multiple roots in the multivariate case can lead to rather interesting patterns, as there will be sectors with linear convergence and others with quadratic convergence towards the sectors with linear convergence. Acceleration of the linear convergence then can land again in the quadratic sectors, leading to a periodic cycle, A. Griewank did some extensive analysis of that in the 1980ies.