Are step sizes in quadratic programming solvers analytically exact?


In this paper (DOI link), Goldfarb and Idnani describe a dual active-set algorithm for solving strictly convex quadratic programs. This algorithm (or a very similar one) is implemented in the quadprogpp package for R. On page 29 of their paper, they show a few possible solution paths leading to a point of optimality given three inequality constraints. It appears that the solver steps exactly onto that solution. This seems to contrast with gradient-based solvers, which use a specified tolerance and a (potentially dynamic) step size to iterate toward a solution, often converging to a point that is suboptimal but within the specified tolerance.
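For intuition about why such a step can be exact: once a set of constraints is treated as active (held at equality), the minimizer of the quadratic over that set is the solution of a *linear* KKT system, which a direct solve recovers exactly up to floating-point round-off, with no iterative tolerance involved. A minimal sketch in Python/NumPy, using a made-up two-variable QP with a single equality constraint (not the paper's example):

```python
import numpy as np

# Hypothetical QP: minimize 1/2 x'Gx + a'x  subject to  C'x = b
G = np.array([[4.0, 1.0],
              [1.0, 3.0]])      # positive definite Hessian
a = np.array([1.0, 1.0])
C = np.array([[1.0],
              [1.0]])           # single constraint: x1 + x2 = 1
b = np.array([1.0])

n, m = G.shape[0], C.shape[1]

# Stationarity (Gx + a = C*lam) plus feasibility (C'x = b) form one
# linear KKT system; solving it directly gives the exact minimizer.
K = np.block([[G, -C],
              [C.T, np.zeros((m, m))]])
rhs = np.concatenate([-a, b])
sol = np.linalg.solve(K, rhs)

x, lam = sol[:n], sol[n:]
print(x)    # exact minimizer, up to floating-point round-off
```

For this toy problem the solve returns x = (0.4, 0.6). An active-set method like Goldfarb–Idnani repeatedly solves systems of this kind as constraints enter and leave the active set, which is why its steps land on well-defined points rather than approaching them by a tolerance-controlled descent.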

My questions are the following:

  1. Does the quadratic solver described in the paper (and implemented in the package) actually find the exact solution (subject to numerical imprecision, of course), or is it just getting to a solution that's within some tolerance of the true optimum?

  2. If the answer to the first question is the latter (inexact solution, within tolerance of true optimum), how would I go about adjusting the tolerance and/or step size so as to make the search more granular? I understand that this may be slightly off-topic since it crosses into software/programming, but any suggestions would be much appreciated.

  3. If the answer to the first question is the latter, what is the stopping criterion, and how can it be adjusted?