The goal of this question is to understand the minimal set of equations or inequalities needed for solving an optimization problem in primal or dual form.
Suppose we have a convex constrained optimization problem for which strong duality holds. We want to find both the primal solution and the dual solution. We use the KKT conditions, since under these assumptions they are sufficient for optimality.
The point is that it seems neither finding the primal solution nor finding the dual solution actually requires "complementary slackness". We only need it as a sanity check, or to recover one of the primal/dual solutions from the other. Is that true?
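For reference, here is the standard statement of the conditions I am referring to, for a problem $\min_x f(x)$ subject to $g_i(x) \le 0$, at a candidate primal/dual pair $(x^*, \lambda^*)$:

```latex
\begin{aligned}
\nabla f(x^*) + \sum_i \lambda_i^* \nabla g_i(x^*) &= 0 && \text{(vanishing gradient / stationarity)}\\
g_i(x^*) &\le 0 && \text{(primal feasibility)}\\
\lambda_i^* &\ge 0 && \text{(dual feasibility)}\\
\lambda_i^*\, g_i(x^*) &= 0 && \text{(complementary slackness)}
\end{aligned}
```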
See below:
Solve the primal with KKT for:
- Primal Feasibility
- Vanishing Gradient
Solve the dual with KKT for:
- Dual Feasibility
- Vanishing Gradient
Then recover the primal solution from the dual solution, or vice versa, with:
- Complementary Slackness
Is the observation above true?
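As a concrete illustration (a toy problem of my own choosing, not the SVM itself), here is the workflow above applied to $\min_x x^2$ subject to $x \ge 1$, where every step can be done by hand:

```python
# Toy example (my own, for illustration):
#   primal:  minimize x^2  subject to  1 - x <= 0
#   Lagrangian:  L(x, lam) = x^2 + lam * (1 - x)

def dual_function(lam):
    # Inner minimization over x: dL/dx = 2x - lam = 0  =>  x = lam / 2,
    # giving g(lam) = lam - lam^2 / 4.
    x = lam / 2.0
    return x**2 + lam * (1.0 - x)

# Dual step: maximize g(lam) over lam >= 0.
# dg/dlam = 1 - lam/2 = 0  =>  lam* = 2, which satisfies dual feasibility.
lam_star = 2.0

# Recover the primal solution from the stationarity condition: x* = lam*/2.
x_star = lam_star / 2.0

# Sanity checks: all four KKT conditions hold at (x*, lam*).
assert 1.0 - x_star <= 1e-12                   # primal feasibility
assert lam_star >= 0.0                         # dual feasibility
assert abs(2 * x_star - lam_star) < 1e-12      # stationarity (vanishing gradient)
assert abs(lam_star * (1.0 - x_star)) < 1e-12  # complementary slackness
assert abs(dual_function(lam_star) - x_star**2) < 1e-12  # strong duality: g(lam*) = f(x*)

print(x_star, lam_star)  # prints: 1.0 2.0
```

Note that in this example complementary slackness is indeed only used as a check: the dual solution came from dual feasibility plus the vanishing gradient of $g$, and the primal solution was recovered from the stationarity of the Lagrangian.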
Context: the question arose while working with the dual of support vector machine training in machine learning.