In Proposition 3.3.1 of Nonlinear Programming by Bertsekas, it is stated that if $x^*$ is a regular point and a local minimizer of the constrained optimization problem, then there exists a unique pair $(\lambda^*, \mu^*)$ satisfying the KKT conditions.
I have the following question in that regard:
- What is meant by a regular point? Must every convex optimization problem satisfy this regularity condition?
- I understand that a convex optimization problem need not satisfy the KKT conditions, but if Slater's condition holds, then the KKT conditions do hold at a minimizer. So what is the difference between Slater's condition and the regularity condition?
Let your feasible set be defined by $$ f_i(x) \leq 0, \quad h_j(x) = 0, \quad \text{for } i = 1, \dots, m \text{ and } j = 1, \dots, k, $$ where all functions involved are differentiable. Fix a point $x$ and denote the set of "active" inequality constraints by $$ \mathcal{I}(x) := \{ i \mid f_i(x) = 0 \}. $$
We say $x$ is regular if the collection of vectors below is linearly independent: $$ \{ \nabla h_j(x), \; \nabla f_i(x) \mid j = 1, \dots, k, \; i \in \mathcal{I}(x) \}. $$
This is also known in the literature as LICQ (linear independence constraint qualification).
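Concretely, LICQ at a point amounts to the matrix whose rows are the gradients $\nabla h_j(x)$ and $\nabla f_i(x)$, $i \in \mathcal{I}(x)$, having full row rank. A minimal numerical sketch in Python (the helper name and example gradients are my own, not from the book):

```python
import numpy as np

def licq_holds(grad_h, grad_f_active):
    """Check LICQ at a point: the gradients of all equality constraints
    together with the gradients of the *active* inequality constraints
    must be linearly independent, i.e. the stacked matrix has full row rank."""
    grads = np.array(grad_h + grad_f_active, dtype=float)
    if grads.size == 0:
        return True  # no constraints active: LICQ holds vacuously
    return np.linalg.matrix_rank(grads) == grads.shape[0]

# In R^2: gradient [1, -1] of an equality constraint together with an
# active inequality gradient [2, -2] is a dependent set -> LICQ fails.
print(licq_holds([[1.0, -1.0]], [[2.0, -2.0]]))  # False
# Independent gradients [1, 0] and [0, 1] -> LICQ holds.
print(licq_holds([[1.0, 0.0]], [[0.0, 1.0]]))    # True
```

Note that only the active inequality gradients enter the check; an inactive constraint ($f_i(x) < 0$) places no restriction at $x$.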
In general, LICQ is a stronger condition than Slater's: Slater's condition can hold for a problem even though LICQ fails at particular feasible points, as alluded to in this math.SE post.
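To see the gap concretely, here is a toy example (my own construction, not from Bertsekas): describe the set $\{x \le 0\}$ redundantly by $f_1(x) = x \le 0$ and $f_2(x) = 2x \le 0$. The point $x = -1$ is strictly feasible, so Slater's condition holds, yet at the boundary point $x = 0$ both constraints are active and their gradients are linearly dependent:

```python
import numpy as np

# At x = 0 both f1(x) = x and f2(x) = 2x are active, with (scalar)
# gradients f1'(0) = 1 and f2'(0) = 2. Stacked as a 2x1 matrix they
# have rank 1 < 2, so LICQ fails there even though x = -1 is strictly
# feasible and Slater's condition holds for the problem.
active_grads = np.array([[1.0], [2.0]])
print(np.linalg.matrix_rank(active_grads))  # 1, not 2 -> LICQ fails
```

So uniqueness of the multipliers, which Bertsekas's proposition derives from regularity, is lost here: any $(\mu_1, \mu_2) \ge 0$ with $\mu_1 + 2\mu_2$ fixed gives the same stationarity condition.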