What Stopping Criteria to Use in Projected Gradient Descent


Suppose we want to solve a convex constrained minimization problem using projected gradient descent. If there were no constraint, the stopping condition for a gradient descent algorithm would be that the gradient of the objective is close to zero.

But for a constrained problem, the gradient at the optimal point is not necessarily (close to) zero. What stopping condition should we use for a projected gradient descent algorithm?
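To make the premise concrete, here is a small sketch (my own illustration, not from the question): for $\min (x-2)^2$ subject to $x \le 1$, the optimum is $x^* = 1$, where the gradient $2(x-2) = -2$ is not zero, so "gradient $\approx 0$" never triggers. One common alternative is to stop when the projected step itself becomes small:

```python
# Projected gradient descent on min (x - 2)^2  s.t.  x <= 1.
# The stopping test checks |x - P(x - t*grad(x))|, i.e. whether the
# projected step has stalled, rather than whether the gradient is zero.

def grad(x):
    return 2.0 * (x - 2.0)

def project(x):
    # Projection onto the feasible set {x : x <= 1}
    return min(x, 1.0)

def projected_gradient_descent(x0, step=0.1, tol=1e-8, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_new = project(x - step * grad(x))
        if abs(x_new - x) <= tol:  # projected step is (nearly) zero: stop
            return x_new
        x = x_new
    return x

x_star = projected_gradient_descent(0.0)
print(x_star, grad(x_star))  # x* = 1.0, gradient = -2.0 (nonzero at the optimum)
```

The iterate converges to the boundary point $x^* = 1$ even though the gradient there is $-2$, which is exactly why the unconstrained stopping test fails here.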

1 Answer

There are actually several methods for dealing with constrained optimization problems; see the Penalty Method or the Augmented Lagrangian Method for two examples.

To solve the problem \begin{align} \min f(x) \\ \text{subject to: } c_i(x) \ge 0 ~\forall i \in I \end{align} we reformulate it for the Penalty Method as \begin{align} \min F_k(x) = f(x) + \sigma_k \sum_{i \in I} p(c_i(x)) \end{align} with $p(c) = \min(0,c)^2$.
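As a concrete illustration (my own example, not from the answer): for $\min (x-2)^2$ subject to the single constraint $c(x) = 1 - x \ge 0$, the penalized objective is \begin{align} F_k(x) = (x-2)^2 + \sigma_k \min(0, 1-x)^2. \end{align} For $x > 1$ the penalty is active, and setting $F_k'(x) = 2(x-2) + 2\sigma_k(x-1) = 0$ gives the minimizer $x_k^* = \frac{2+\sigma_k}{1+\sigma_k}$, which approaches the constrained optimum $x^* = 1$ from the infeasible side as $\sigma_k \to \infty$.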

You then solve a sequence of these unconstrained problems, increasing the penalty coefficient $\sigma_k$ at each outer iteration $k$. In this way you hope to obtain a minimizer of $F_k$ that also minimizes $f$ while fulfilling the constraints.
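The outer loop can be sketched roughly as follows (my own illustration: the inner solver is plain gradient descent, the step-size rule is a crude ad-hoc choice, and the example function and constraint are made up, none of which is prescribed by the answer):

```python
# Penalty method sketch: minimize F_k(x) = f(x) + sigma_k * sum_i min(0, c_i(x))^2
# for an increasing sequence sigma_k, warm-starting each solve from the last.

def penalty_method(f_grad, cons, cons_grads, x0, sigma0=1.0, growth=10.0,
                   outer_iters=8, inner_iters=500):
    x = x0
    sigma = sigma0
    for _ in range(outer_iters):
        step = 0.5 / (1.0 + sigma)  # crude step size; shrinks as the penalty stiffens
        for _ in range(inner_iters):
            g = f_grad(x)
            for c, cg in zip(cons, cons_grads):
                v = c(x)
                if v < 0.0:  # only violated constraints contribute to the gradient
                    g += sigma * 2.0 * v * cg(x)
            x -= step * g
        sigma *= growth  # tighten the penalty for the next outer iteration
    return x

# Example: min (x - 2)^2 subject to c(x) = 1 - x >= 0 (i.e. x <= 1)
x_star = penalty_method(
    f_grad=lambda x: 2.0 * (x - 2.0),
    cons=[lambda x: 1.0 - x],
    cons_grads=[lambda x: -1.0],
    x0=0.0,
)
print(x_star)  # ≈ 1, approaching the constrained optimum as sigma grows
```

Note that each outer iterate sits slightly outside the feasible set; the plain penalty method typically converges to the constrained optimum only in the limit $\sigma_k \to \infty$, which is one motivation for the Augmented Lagrangian variant.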

Edit:

Note that the constraints also have to be smooth, since you evaluate their gradients.