Reentrant constraints in active set algorithm?


Problem definition

Suppose you're trying to solve a quadratic program: $$ \min_x f(x) = \frac{1}{2}x^T Q x + c^T x \\ \text{s.t.} \; A x \ge b$$

where $Q$ is a symmetric, positive semidefinite $n \times n$ matrix and $A$ is any $m \times n$ matrix.

If you use an active set method, you basically choose rows of $A$ to enforce as equalities (the "active set") and assume the remaining rows are inactive and can, more or less, be ignored. You move in the direction of steepest descent of $f(x)$ as best you can while respecting the constraints in the active set, until you're about to violate a constraint that wasn't in the active set. You add that new constraint to the active set, remove any active constraints that a descent step no longer pushes against, rinse, and repeat.

Question

Is there ever a situation where a constraint is added to the active set, removed, and then needs to be re-added later on? Intuitively, thinking about it geometrically, the constraints $Ax \ge b$ carve out a (possibly unbounded) convex polyhedron in $\mathbb{R}^n$. I would expect that once we move down the gradient away from a constraint plane, so that the constraint leaves the active set, it shouldn't be possible to end up back at that constraint plane moving in the opposite direction.

If the solver "cycles", i.e. it doesn't actually improve the value of $f(x)$ but just thrashes around adding and removing constraints from the active set, I could see a removed constraint getting re-added. But once you actually take a step along the descent direction and decrease $f(x)$ by any amount, I wouldn't expect to reach that constraint plane ever again.

Is my intuitive/geometric understanding correct? It's not exactly a formal argument :P The motivation is that I'm trying to write a solver, and I'd like to be able to use the row of a constraint removed from the active set as scratch space for other calculations, without needing to keep a copy of it around.

Answer

Constraints can leave and reenter the active set multiple times as the algorithm progresses. I can't think of an example off the top of my head, but you can find one by generating lots of small random nonnegative least squares problems and watching what your algorithm does.
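To make that experiment concrete, here's a sketch (not from the answer; the solver, tolerances, problem sizes, and the `nnls_active_set`/`reentered` helper names are my own) of a bare-bones Lawson–Hanson-style NNLS active-set solver that snapshots which bound constraints $x_i \ge 0$ are active at each outer iteration, then scans random problems for a constraint that leaves the active set and later reenters:

```python
import numpy as np

def nnls_active_set(A, b, max_iter=500):
    """Minimal Lawson-Hanson-style NNLS: min ||Ax - b||^2 s.t. x >= 0.
    Returns the solution and a per-outer-iteration snapshot of which
    bound constraints x_i >= 0 are active (variable clamped at zero)."""
    n = A.shape[1]
    x = np.zeros(n)
    free = np.zeros(n, dtype=bool)          # True = x_i allowed off its bound
    snapshots = [~free]                     # active set = complement of free set
    for _ in range(max_iter):
        w = A.T @ (b - A @ x)               # negative gradient of 0.5*||Ax-b||^2
        if free.all() or np.max(w[~free]) <= 1e-10:
            break                           # KKT conditions hold: done
        # free the clamped variable whose constraint we most want to drop
        free[np.argmax(np.where(free, -np.inf, w))] = True
        while True:
            z = np.zeros(n)                 # unconstrained LS on free variables
            z[free] = np.linalg.lstsq(A[:, free], b, rcond=None)[0]
            if np.min(z[free]) > 0:
                x = z
                break
            # step from x toward z, stop at the first bound hit,
            # then re-clamp the variables that landed on their bound
            blocking = free & (z <= 0)
            alpha = np.min(x[blocking] / (x[blocking] - z[blocking]))
            x = x + alpha * (z - x)
            free &= x > 1e-12
            x[~free] = 0.0
        snapshots.append(~free)
    return x, snapshots

def reentered(snapshots):
    """Indices whose bound constraint was active, left, and became active again."""
    H = np.array(snapshots)                 # iterations x variables, boolean
    hits = []
    for i in range(H.shape[1]):
        col = H[:, i]
        left = np.flatnonzero(col[:-1] & ~col[1:])  # active -> inactive steps
        if left.size and col[left[0] + 1:].any():   # ... and active again later
            hits.append(i)
    return hits

rng = np.random.default_rng(0)
count = 0
for trial in range(200):                    # lots of small random problems
    A = rng.standard_normal((4, 6))
    b = rng.standard_normal(4)
    x, snaps = nnls_active_set(A, b)
    if reentered(snaps):
        count += 1
print(f"constraint reentry observed in {count}/200 trials")
```

The underdetermined shape (more variables than rows) makes many bound constraints active at the solution, which is exactly the regime where the inner loop re-clamps previously freed variables.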

Separately, I would caution you against moving along the gradient rather than the Newton direction within the subspace defined by your active constraints. Gradient descent has horrible convergence properties on quadratics whose Hessian isn't close to a multiple of the identity, i.e., is ill-conditioned.