Is it possible in an optimization problem for the function value to converge while the variable does not?


Suppose we want to optimize $f(x)$. Is it possible that there exists an algorithm which makes the function value $f(x)$ converge to an optimal value but does not make the variable $x$ converge? For example: suppose there is a region $A$ such that every $x \in A$ minimizes $f(x)$, and the algorithm leaves $x$ wandering inside $A$. Is this possible? Is there a specific example?


Sure. Take Newton's method on the function $f(x) = \frac{1}{x}$, on the domain $(0, \infty)$. Take a first estimate $x_0 = 1$, and define $$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{\frac{1}{x_n}}{-\frac{1}{x_n^2}} = 2x_n.$$ Inductively, we see $x_n = 2^n$. Note that $f(x_n) = \frac{1}{2^n} \to 0$, the infimal value of the function, but the iterates $x_n$ fail to converge.
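The divergence of the iterates alongside the convergence of the function values can be checked numerically. Below is a minimal sketch of the recurrence above (plain Python, no external libraries; the function and step names are just illustrative):

```python
def f(x):
    # The objective f(x) = 1/x on the domain (0, infinity).
    return 1.0 / x

def newton_step(x):
    # For f(x) = 1/x we have f'(x) = -1/x^2, so the Newton update
    # x - f(x)/f'(x) simplifies to x + x = 2x.
    return 2.0 * x

x = 1.0  # first estimate x_0 = 1
for n in range(20):
    x = newton_step(x)

print(x)     # the iterates x_n = 2^n blow up (here 2^20)
print(f(x))  # while f(x_n) = 1/2^n approaches the infimal value 0
```

After 20 steps the iterate has grown to $2^{20}$, yet the function value $1/2^{20}$ is already within $10^{-6}$ of the infimum $0$, illustrating that the objective can converge while the iterates diverge.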