Iteratively Reweighted Least Squares: termination criterion


I'm implementing Iteratively Reweighted Least Squares using the algorithm described at

https://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares

What I wonder now is how many iterations I should perform. I guess I can terminate when $\left| \boldsymbol\beta^{(t)} - \boldsymbol\beta^{(t-1)} \right| < \epsilon$, but how do I choose $\epsilon$? Maybe somehow relative to $\left| \boldsymbol\beta^{(1)} - \boldsymbol\beta^{(0)} \right|$?

What's a good approach here? I'm writing some performance-critical code, so I'm trying to stop somewhere around the point of diminishing returns.


1 Answer


In my experience, there are a lot of varying opinions on what is a "good" stopping condition. So, you've asked a really good question, for which I'm not aware of a simple "one size fits all" answer. Some reasonable measures people use are terminating when:

  1. the gradient of the objective function gets small,

  2. the sequence of iterates begins to stagnate (i.e. the one you've listed),

  3. the function value begins to stagnate, or

  4. some combination of the above.
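To make the options above concrete, here is a minimal sketch of a combined stopping test in Python (the function name and default tolerances are my own illustrative choices, not from any particular library):

```python
import numpy as np

def should_stop(grad, beta, beta_prev, f, f_prev,
                tol_grad=1e-8, tol_step=1e-8, tol_obj=1e-10):
    """Return True if any of the three stopping tests fires."""
    # 1. gradient of the objective is small
    if np.linalg.norm(grad) < tol_grad:
        return True
    # 2. the sequence of iterates has stagnated
    if np.linalg.norm(beta - beta_prev) < tol_step:
        return True
    # 3. the objective value has stagnated
    if abs(f - f_prev) < tol_obj:
        return True
    return False
```

Checking several conditions at once (option 4) costs almost nothing per iteration, and guards against the failure modes any single test has on its own.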

There are also relative error versions of the above, e.g. instead of terminating on small $|\beta^{(t)}-\beta^{(t-1)}|$, one may terminate when the relative change $|\beta^{(t)}-\beta^{(t-1)}|/|\beta^{(t-1)}|$ becomes "small". There are pros and cons of each of these methods. For instance, beware of only using rule (1) because there are instances where the gradient has a very small norm, while simultaneously the iterate is arbitrarily far from the minimizer.
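As a sketch of how the relative-change test plugs into an IRLS loop, here is the $\ell_1$ (least absolute deviations) variant worked out on the linked Wikipedia page, with a small damping constant `delta` to keep the weights finite (the function name and defaults are illustrative assumptions):

```python
import numpy as np

def irls_l1(X, y, eps=1e-6, delta=1e-8, max_iter=100):
    """IRLS for l1 regression, terminating on relative change in beta."""
    # start from the ordinary least-squares solution
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(max_iter):
        r = y - X @ beta
        # reweight by 1/|residual|, damped so we never divide by zero
        w = 1.0 / np.maximum(np.abs(r), delta)
        WX = X * w[:, None]                      # rows of X scaled by w
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ y)
        # relative stagnation test: stop when the step is small
        # compared to the current iterate
        if np.linalg.norm(beta_new - beta) <= eps * np.linalg.norm(beta):
            beta = beta_new
            break
        beta = beta_new
    return beta
```

The `max_iter` cap matters in practice: near the minimizer the relative step can oscillate just above `eps`, and a hard iteration limit bounds the worst case for performance-critical code.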

Choosing a "good" threshold $\epsilon$ varies widely with the application. While some folks use trial and error to determine reasonable default values, I think the general consensus is that choosing $\epsilon$ should be left to the engineer/user.