It is well known that the subgradient method applied to the Lagrange dual of a convex problem may produce a sequence converging to the dual optimum, but the corresponding primal iterates will not necessarily converge to the primal optimum.
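To make the non-convergence concrete, here is a toy instance I experimented with (the problem, step size, and tie-breaking rule are all my own choices; just a sketch): minimize $x$ subject to $x = 0$ over the box $x \in [-1, 1]$. The dual function is $g(\lambda) = \min_{x \in [-1,1]} (1+\lambda)x = -|1+\lambda|$, maximized at $\lambda^* = -1$, where *every* $x \in [-1, 1]$ minimizes the Lagrangian; for any other $\lambda$ the minimizer jumps to an endpoint of the box.

```python
# Toy primal: minimize x  subject to  x = 0,  over the box x in [-1, 1].
# Primal optimum: x* = 0.  Lagrangian: L(x, lam) = x + lam*x = (1 + lam)*x.
# Dual function: g(lam) = min_{x in [-1,1]} (1 + lam)*x = -|1 + lam|,
# maximized at lam* = -1, where every x in [-1, 1] minimizes the Lagrangian.

def lagrangian_minimizer(lam):
    # Minimizer of (1 + lam)*x over [-1, 1]; ties broken arbitrarily (+1 here).
    return -1.0 if (1.0 + lam) > 0 else 1.0

step = 0.01       # constant step size (arbitrary choice for this sketch)
lam = 0.0
xs = []
for _ in range(2000):
    x = lagrangian_minimizer(lam)  # primal iterate produced by the dual method
    xs.append(x)
    lam += step * x                # subgradient ascent on g: the constraint
                                   # value h(x) = x is a subgradient at lam

avg = sum(xs) / len(xs)
print("final dual variable:", lam)     # settles near lam* = -1
print("last primal iterate:", xs[-1])  # stuck at an endpoint, never near x* = 0
print("averaged primal:    ", avg)     # close to the optimum x* = 0
```

The dual variable converges, but the primal iterates just flip between the two endpoints of the box forever. (Interestingly, the running average of the iterates does approach 0, which I gather is why ergodic/averaged primal recovery schemes exist; that is exactly the kind of machinery I would like intuition for.)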
I'm looking for some intuition, or at least a reference with good coverage, on the problem of recovering primal optimal solutions from an optimal dual solution. Here are some closely related points of confusion:
- Why shouldn't we expect the primal sequence to converge to an optimal point?
- What could the primal sequence potentially converge to, if not the optimum? Are there any illuminating counter-examples?
- When do the two always match? Is strict convexity of the primal objective function sufficient?
- Is strict convexity a necessary condition? What if the primal objective is strictly convex at the optimal point, but not necessarily everywhere? That is, is all this fuss caused by potential degeneracy of the primal optimal solution, or is it worse than that?
- Can the dual be degenerate if the primal is not?
That's a lot to answer, so references to nice books or lecture notes would also be much appreciated (research papers are welcome too, but they generally gloss over the intuitive aspect).