I know that for a general optimization problem, the solution of the dual problem yields a lower bound on the primal optimal value, which seems handy.
But assuming the problem is a convex optimization problem, we know that the optimal value of the dual problem coincides with the primal optimal value. Yet even in the convex case we still have to solve an optimization problem using Lagrange multipliers, so what exactly did we gain here?
Is it the fact that we can use the KKT conditions to ease the solution?
Is it the fact that I can minimize the Lagrangian with respect to $x$ and be sure that I got the solution to the primal?
Is it the fact that I can optimize the Lagrangian with respect to $\lambda_i$ and plug the result back into the Lagrangian, then optimize with respect to $x$?
Are the facts that I have written even true?
Thanks.
In many convex optimization problems, we have strong duality. That is, the optimal value of the primal problem is equal to the optimal value of the dual problem. Strong duality also holds for some (but very few) non-convex optimization problems.
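For reference, the lower-bound property mentioned in the question (weak duality) is immediate from the definitions. Writing the primal as $\min f(x)$ subject to $h_i(x) \le 0$, for any primal-feasible $x$ and any $\lambda \ge 0$,

$$g(\lambda) = \inf_z \left( f(z) + \sum_i \lambda_i h_i(z) \right) \le f(x) + \sum_i \lambda_i h_i(x) \le f(x),$$

since each $\lambda_i h_i(x) \le 0$. Strong duality is precisely the statement that this chain of inequalities is tight at the optimum.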
Strong duality always holds for Linear Programming problems that have optimal primal and dual solutions. It is also true of more general convex optimization problems that satisfy any of a number of constraint qualifications. For example, Slater's constraint qualification tells us that if a convex optimization problem has a solution that is strictly feasible with respect to its inequality constraints, then strong duality holds.
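As an illustration of the LP case, here is a minimal sketch using `scipy.optimize.linprog` (the particular LP and its numbers are made up for the example): we solve a small primal LP and its dual and observe that the optimal objective values coincide.

```python
import numpy as np
from scipy.optimize import linprog

# Primal: min c@x  s.t.  A@x >= b, x >= 0
# Dual:   max b@y  s.t.  A.T@y <= c, y >= 0
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# linprog minimizes with <= constraints, so flip signs where needed.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)])

print(primal.fun)   # primal optimal value: 1.0
print(-dual.fun)    # dual optimal value:   1.0 (equal, by strong duality)
```

Here the primal optimum is $x = (1, 0)$ with value $1$, and the dual optimum is $y = 1$ with the same value.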
Many algorithms for convex optimization problems are primal-dual algorithms that simultaneously solve the primal problem and the dual problem. We can measure progress towards finding the optimal solution by verifying that the primal and dual solutions are feasible and checking the gap between the primal and dual objective values of the current solution. We can also use the Karush-Kuhn-Tucker conditions to check the optimality of our final primal and dual solutions.
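To make the gap and KKT checks concrete, here is a toy sketch on the one-dimensional problem $\min x^2$ subject to $x \ge 1$ (the problem and the candidate primal-dual pair are chosen purely for illustration):

```python
# Toy problem: minimize f(x) = x^2  subject to  g(x) = 1 - x <= 0.
# Lagrangian: L(x, lam) = x^2 + lam*(1 - x); dual function: q(lam) = lam - lam^2/4.
def f(x): return x**2
def g(x): return 1 - x
def q(lam): return lam - lam**2 / 4

x_star, lam_star = 1.0, 2.0           # candidate primal/dual solutions

gap = f(x_star) - q(lam_star)         # duality gap: zero iff optimal (strong duality)
stationarity = 2*x_star - lam_star    # dL/dx = 2x - lam, should be 0
comp_slack = lam_star * g(x_star)     # complementary slackness, should be 0

print(gap, stationarity, comp_slack)  # 0.0 0.0 0.0
```

All three quantities vanish (and $\lambda^\star \ge 0$, $g(x^\star) \le 0$ hold), certifying that the pair is optimal; a nonzero gap would bound how far the current primal value can be from optimal.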
Compare this with a primal-only algorithm. There is no way to tell for sure from a primal solution alone whether or not that solution is optimal. In practice, we often just stop when the iterates seem to have stopped changing.