Explanation Needed for optimization algorithm


In a research paper I saw an algorithm that maximizes a function $f(x,y)$ iteratively. The iterations are repeated until the normalized difference between successive values drops below a tolerance, i.e. until $\frac{|f(x,y)^{k}-f(x,y)^{k-1}|}{f(x,y)^{k-1}}\leq \eta$ (where $\eta$ is an arbitrarily small value). My question is: what if $\frac{|f(x,y)^{k}-f(x,y)^{k-1}|}{f(x,y)^{k-1}}\leq \eta$ already holds at $k=2$? Although the maximum may not have been achieved after two iterations, the algorithm will still stop. How does this kind of constraint make sure that the maximization is achieved? Any logical explanation will be very helpful. Thanks in advance.
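To make the worry concrete, here is a toy demonstration (my own construction, not from the paper in question): gradient ascent on a concave $f(x,y)$ with a very small step size. Consecutive iterates barely move, so the relative change in $f$ is tiny even though the iterate is still far from the maximizer, and the criterion fires almost immediately.

```python
# Toy example: the relative-change test |f^k - f^{k-1}| / f^{k-1} <= eta
# can fire long before the maximum is reached if progress per iteration
# is small. All names and values here are illustrative.

def f(x, y):
    return 10.0 - (x - 1.0) ** 2 - (y - 2.0) ** 2  # maximum is f(1, 2) = 10

def gradient_ascent(step, eta, max_iters=100_000):
    x, y = 0.0, 0.0
    prev = f(x, y)
    for k in range(1, max_iters + 1):
        x += step * (-2.0 * (x - 1.0))   # ascend along df/dx
        y += step * (-2.0 * (y - 2.0))   # ascend along df/dy
        cur = f(x, y)
        if abs(cur - prev) / abs(prev) <= eta:
            return x, y, k               # criterion fired at iteration k
        prev = cur
    return x, y, max_iters

# Tiny step: the criterion fires after one or two iterations, with (x, y)
# still near the starting point (0, 0) rather than the maximizer (1, 2).
x, y, k = gradient_ascent(step=1e-4, eta=1e-3)
print(k, x, y)
```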

1 Answer


Unless you know what your maximum value is, it is not easy to decide when to stop an optimization algorithm. Your criterion is a typical stopping criterion: stop when the relative change is small. It does not guarantee that the maximum has been reached; it only detects that progress has stalled, which is why it is often combined with safeguards such as a minimum number of iterations or a check on the gradient norm. In practice, $\eta$ is $10^{-3}$, $10^{-6}$, etc., depending on how fast the method generally converges. (First-order methods will need a larger $\eta$.)
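A minimal sketch of such a safeguarded loop, assuming gradient ascent on a concave objective (the function, step size, and safeguard values are my own illustrative choices, not from the paper): the relative-change test is only consulted after a minimum number of iterations, and the denominator is floored to avoid division by values near zero.

```python
# Gradient ascent with a safeguarded relative-change stopping rule.
# Illustrative sketch: the objective and all parameters are assumptions.

def f(x, y):
    return 10.0 - (x - 1.0) ** 2 - (y - 2.0) ** 2  # maximum is f(1, 2) = 10

def grad_f(x, y):
    return (-2.0 * (x - 1.0), -2.0 * (y - 2.0))

def maximize(eta=1e-6, step=0.1, max_iters=10_000, min_iters=5):
    x, y = 0.0, 0.0
    prev = f(x, y)
    for k in range(1, max_iters + 1):
        gx, gy = grad_f(x, y)
        x, y = x + step * gx, y + step * gy
        cur = f(x, y)
        # Relative-change criterion |f^k - f^{k-1}| / |f^{k-1}| <= eta,
        # with a floor on the denominator and a minimum iteration count
        # so the loop cannot stop spuriously on its first few steps.
        denom = max(abs(prev), 1e-12)
        if k >= min_iters and abs(cur - prev) / denom <= eta:
            break
        prev = cur
    return x, y, k

x, y, iters = maximize()
print(x, y, iters)   # close to the maximizer (1, 2) after a few dozen steps
```

With a reasonable step size the relative change shrinks geometrically, so the loop stops well before `max_iters` with the iterate close to the true maximizer.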