How can I be sure that Newton's method converges to the optimal solution for a strictly convex one-variable function?


I have a function of one variable with domain $(0, \infty)$. The function is strictly convex for any choice of its parameters (its second derivative is positive everywhere). I want to minimize this function.

In the examples I have tried, Newton's method always found the optimal solution (the tolerance level was not an issue). However, how can I guarantee that this method will always find the optimal solution for this function, for any values of the parameters? I have looked for an answer in some books (e.g. Nonlinear Programming, Mordecai Avriel) but did not find a clear one. It seems that strict convexity is a necessary condition, but is it sufficient?

I would appreciate an answer with the corresponding book to dive deeper. Moreover, a numerical example would also be appreciated if possible.

Thanks.

Edit 1: As additional information, the function $f(x)$ tends to $\infty$ both as $x \to 0^+$ and as $x \to \infty$.

1 Answer

It is not sufficient: the very first Newton step, for example, can land at a negative number, i.e. outside your domain. There are more subtle failure modes as well; see the Wikipedia article on Newton's method.
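Since a numerical example was requested, here is a minimal sketch with the illustrative choice $f(x) = x - \ln x$ (not the asker's actual function). It is strictly convex on $(0,\infty)$ with $f''(x) = 1/x^2 > 0$, tends to $\infty$ at both ends of the domain, and has its unique minimizer at $x = 1$. The Newton step simplifies to $x_{k+1} = x_k - f'(x_k)/f''(x_k) = 2x_k - x_k^2$, so starting from $x_0 = 3$ the first iterate is already $2\cdot 3 - 9 = -3$, outside the domain:

```python
def newton_minimize(x0, n_steps=20):
    """Newton's method for minimizing f(x) = x - ln(x) on (0, inf).

    f is strictly convex (f''(x) = 1/x^2 > 0) and f -> inf at both
    0 and inf; the unique minimizer is x = 1.  For this f the Newton
    update x - f'(x)/f''(x) simplifies to 2*x - x**2.
    """
    x = x0
    for _ in range(n_steps):
        if x <= 0:
            return x  # iterate left the domain: the method has failed
        x = 2 * x - x * x
    return x

# Starting close enough to the minimizer, the iterates converge to x = 1:
print(newton_minimize(0.5))
# Starting at x0 = 3, the very first step is 2*3 - 9 = -3 < 0,
# outside the domain, even though f is strictly convex:
print(newton_minimize(3.0))
```

This shows that strict convexity alone is not sufficient; convergence is only guaranteed when the starting point is close enough to the minimizer (or when the method is safeguarded, e.g. by damping the step).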