Why does Newton's method in optimisation converge in one step for a quadratic function?


According to the Wikipedia page for Newton's method in optimization, when using Newton's method to find $\min_{x \in \mathbb{R}} f(x)$ for a twice-differentiable function $f: \mathbb{R} \rightarrow \mathbb{R}$, the exact extremum is found in one step if $f$ happens to be quadratic. Why?

Write $f(x) = c(x - a)^2 + b$ with $c \neq 0$.
Notice $f'(x) = 2c(x-a)$ and $f''(x) = 2c$.
Plug into Newton's update $x_1 = x_0 - \frac{f'(x_0)}{f''(x_0)}$: $x_1 = x_0 - \frac{2c(x_0-a)}{2c} = x_0 - (x_0 - a) = a$, the exact extremum, regardless of the starting point $x_0$.
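This cancellation is easy to check numerically. A minimal sketch of a single Newton update on the 1-D quadratic above; the values of $c$, $a$, $b$, and the starting point are arbitrary choices for illustration:

```python
# One Newton step on f(x) = c*(x - a)**2 + b.
def newton_step(x0, f_prime, f_double_prime):
    """Single Newton update: x1 = x0 - f'(x0) / f''(x0)."""
    return x0 - f_prime(x0) / f_double_prime(x0)

c, a = 3.0, 2.5  # arbitrary quadratic: f(x) = 3*(x - 2.5)**2 - 1
f_prime = lambda x: 2 * c * (x - a)
f_double_prime = lambda x: 2 * c

x1 = newton_step(5.0, f_prime, f_double_prime)
print(x1)  # lands exactly on the minimiser a = 2.5 in one step
```

Starting from any other $x_0$ gives the same result, since the $2c$ factors cancel exactly.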


Taking a quadratic $f: \mathbb{R}^n \to \mathbb{R}, f(x)=\frac 1 2 x^T A x$ with $A$ positive definite, we have $f'(x) = Ax$ and $f''(x) = A$ (can be verified using Taylor series).

Since Newton's step starting at $x_0$ is $x_1 = x_0 + p$ where $p = - f''(x_0)^{-1} f'(x_0) = - A^{-1} A x_0 = -x_0$, we get $x_1 = x_0 - x_0 = 0$, and $f'(0) = 0$. Because $A$ is positive definite, $x = 0$ is the global minimum, reached after one step.
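The multivariate step can be sketched in NumPy; the matrix $A$ and starting point $x_0$ below are arbitrary illustrative choices (any symmetric positive definite $A$ works):

```python
import numpy as np

# Newton step for f(x) = 1/2 x^T A x, with A symmetric positive definite.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])      # arbitrary SPD matrix
x0 = np.array([2.0, -1.0])      # arbitrary starting point

grad = A @ x0                   # f'(x0) = A x0
p = -np.linalg.solve(A, grad)   # Newton direction: -A^{-1} f'(x0) = -x0
x1 = x0 + p
print(x1)                       # the zero vector: the global minimiser
```

Note that in practice one solves the linear system $A p = -f'(x_0)$ rather than forming $A^{-1}$ explicitly.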