Enforcing the constraints of a constrained optimization problem after transforming it into an unconstrained one


Suppose there is a constrained convex optimization problem as shown below:

\begin{equation} \begin{aligned} & \min\limits_{\mathbf{x}} & & f(\mathbf{x}) \\ & \text{s.t.} & & g(\mathbf{x})=\mathbf{c} \end{aligned} \end{equation}

Using methods such as the augmented Lagrangian method, it can be rewritten as an unconstrained convex optimization problem:

$$ \min\limits_{\mathbf{x}} f(\mathbf{x}) -\mathbf{y}^T(g(\mathbf{x})-\mathbf{c})+\frac{\rho_y}{2}||g(\mathbf{x})-\mathbf{c}||^2_2 $$

where $\mathbf{y}$ is the Lagrange multiplier and $\rho_y$ is the weight of the penalty term.
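For concreteness, the augmented Lagrangian method is usually run as an alternating scheme: minimize the augmented Lagrangian in $\mathbf{x}$ for the current $\mathbf{y}$, then update $\mathbf{y}$ from the constraint residual. A minimal Python sketch of this loop, on a toy instance of my own choosing ($\min \|\mathbf{x}\|^2$ s.t. $x_1+x_2=1$, whose solution is $\mathbf{x}=(0.5,0.5)$; all names here are illustrative, not from the question):

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance (illustrative, not from the question):
# minimize ||x||^2  subject to  x1 + x2 = 1,  solution x = (0.5, 0.5).
def f(x):
    return x @ x

def g(x):
    return x[0] + x[1]

c = 1.0
y = 0.0    # Lagrange multiplier estimate
rho = 1.0  # penalty weight rho_y

x = np.zeros(2)
for _ in range(50):
    # Inner step: minimize the augmented Lagrangian for the current y.
    def aug_lagrangian(x, y=y):
        r = g(x) - c
        return f(x) - y * r + 0.5 * rho * r ** 2
    x = minimize(aug_lagrangian, x).x
    # Outer step: update the multiplier from the constraint residual
    # (the sign matches the -y^T(g(x)-c) convention in the formula above).
    y = y - rho * (g(x) - c)

print(x, y)  # x approaches (0.5, 0.5); y approaches the optimal multiplier 1
```

On this example the residual $g(\mathbf{x})-\mathbf{c}$ shrinks at each outer iteration, even with a fixed $\rho_y$, because the multiplier update keeps pushing the minimizer of the augmented Lagrangian toward the feasible set.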

When solving this unconstrained convex optimization problem, the desired result is that $g(\mathbf{x}) - \mathbf{c}$ is as close to zero as possible (the constraint must hold) while $f(\mathbf{x})$ is minimized. However, the solver is not afraid of making the term $-\mathbf{y}^T(g(\mathbf{x})-\mathbf{c})+\frac{\rho_y}{2}||g(\mathbf{x})-\mathbf{c}||^2_2$ larger as long as doing so brings $f(\mathbf{x})$ down and reduces the overall cost. The resulting $\mathbf{x}$ is then unusable, since the hard constraint $g(\mathbf{x}) = \mathbf{c}$ is violated.

What is the right way to overcome this problem?