Consider the optimization problem
$$ \min_{x \in \mathbb{R}^2} x^{\top} P x + q^{\top} x$$
subject to:
$$ A x = b, \ x \in X, \ x_1^2 + x_2^2 = 1$$
where $X$ is compact and convex.
Then consider the optimization problem
$$ \min_{x \in \mathbb{R}^2} x^{\top} P x + q^{\top} x - \lambda ( x_1^2 + x_2^2)$$
subject to:
$$ Ax=b, \ x\in X, \ x_1^2 + x_2^2 \leq 1$$
I am wondering whether, for $\lambda>0$ sufficiently large, the optimal solution of the second problem approximates the optimal solution of the first one arbitrarily closely. If so, I also wonder whether there exists a finite $\lambda$ for which the two solutions coincide.
"I am wondering if for $\lambda$ sufficiently large the optimal solution of the second problem approximates arbitrarily close the optimal solution of the first one."
In general, the answer is no.
Consider the problem $$ \min_{x \in \mathbb{R}^2} x^{\top} P x + q^{\top} x ~~~~{\rm s.t.}~~ A x = b, \ x \in X. $$ If $x^\star$ is its solution and $(x_1^\star)^2 + (x_2^\star)^2 < 1$, then it is impossible to "force" $x_1^2 + x_2^2 = 1$ by introducing a Lagrange multiplier $\lambda$. In this case, introducing $\lambda$ would lead to even smaller values of $x_1^2 + x_2^2$.
However, if $(x_1^\star)^2 + (x_2^\star)^2 > 1$, an appropriate Lagrange multiplier leads to $x_1^2 + x_2^2 = 1$.
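As a quick numerical check of this second case, here is a sketch on a hypothetical instance (not from the post): take $P = I$, $q = (-4, 0)^{\top}$, drop $Ax = b$, and let $X = \mathbb{R}^2$, so the unconstrained minimizer is $x^\star = (2, 0)$ with $(x_1^\star)^2 + (x_2^\star)^2 = 4 > 1$. Solving the penalized problem for a few values of $\lambda$ shows the solution landing on the unit circle.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance: P = I, q = (-4, 0), no Ax = b, X = R^2.
# The unconstrained minimizer of x'Px + q'x is x* = (2, 0), whose
# squared norm 4 exceeds 1, i.e. the second case discussed above.
P = np.eye(2)
q = np.array([-4.0, 0.0])

def penalized_objective(x, lam):
    # Objective of the second problem: x'Px + q'x - lam * (x1^2 + x2^2)
    return x @ P @ x + q @ x - lam * (x @ x)

# Inequality constraint x1^2 + x2^2 <= 1; SLSQP expects g(x) >= 0.
ball = {"type": "ineq", "fun": lambda x: 1.0 - x @ x}

for lam in [0.0, 0.5, 2.0]:
    res = minimize(penalized_objective, x0=np.array([0.1, 0.1]),
                   args=(lam,), method="SLSQP", constraints=[ball])
    print(f"lambda = {lam}: x = {res.x}, ||x||^2 = {res.x @ res.x:.4f}")
```

On this instance the minimizer is $(1, 0)$ for every $\lambda \ge 0$ shown, so the constraint $x_1^2 + x_2^2 = 1$ is attained; this is only an illustration of one instance, not a proof of the general claim.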