Convex Optimization: Advantages of Symmetric Primal-Dual Algorithms?


This is a follow-up to an answer on a previous question on PD algorithms: https://math.stackexchange.com/a/1193928/36257

I have done some research into the mechanics of how infeasible-start PD algorithms work. In particular, Prof. Vandenberghe's lectures were useful:

http://www.seas.ucla.edu/~vandenbe/236C/ (lectures 14 through 18)
http://www.seas.ucla.edu/~vandenbe/236C/lectures/pd.pdf (describes a typical symmetric infeasible-start PD algorithm)

It looks like, for my research application, I will be well served by ECOS:

https://github.com/embotech/ecos
Background: https://web.stanford.edu/~boyd/papers/pdf/ecos_ecc.pdf
http://www.seas.ucla.edu/~vandenbe/publications/coneprog.pdf

My question is this: my foray into conic programming began with Mr. Grant's suggestion to look into symmetric PD algorithms, but what is the full list of their advantages over a regular barrier solver?

For example, the following is important to me (from the answer):

> most state-of-the-art algorithms today employ what is called a symmetric primal dual method. There are very good reasons to do this, including the ability to start from an infeasible point, and even the ability to handle problems that are not strictly feasible in the first place
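To make the "infeasible start" property concrete for myself, I wrote the following toy sketch (my own illustration, not ECOS's implementation, and no claim it reflects production solvers). It runs a basic infeasible-start primal-dual interior-point iteration on a two-variable LP, starting from a point that violates the equality constraint; the iteration drives the primal residual, dual residual, and complementarity gap to zero simultaneously, which is exactly what a pure barrier method (which needs a strictly feasible starting point for its centering problem) cannot do without a separate phase-I step:

```python
# Toy infeasible-start primal-dual interior-point method for the LP
#     minimize  2*x1 + x2   subject to  x1 + x2 = 1,  x >= 0,
# whose unique optimum is x = (0, 1). The starting point is strictly
# positive but primal-infeasible (x1 + x2 = 2 != 1); the Newton step is
# taken on the primal residual, dual residual, and (centered)
# complementarity conditions together.

c = [2.0, 1.0]      # objective coefficients
b = 1.0             # single equality constraint, A = [1, 1]

x = [1.0, 1.0]      # infeasible primal start, strictly inside the cone x > 0
s = [1.0, 1.0]      # dual slacks, strictly positive
y = 0.0             # multiplier for the equality constraint
sigma = 0.1         # centering parameter

for _ in range(60):
    mu  = (x[0]*s[0] + x[1]*s[1]) / 2.0              # duality measure
    r_p = b - (x[0] + x[1])                          # primal residual
    r_d = [c[i] - y - s[i] for i in range(2)]        # dual residual
    r_c = [sigma*mu - x[i]*s[i] for i in range(2)]   # centered complementarity

    # Eliminate dx, ds from the Newton system; with a single constraint
    # the normal equations collapse to one scalar equation for dy.
    d  = [x[i]/s[i] for i in range(2)]
    dy = (r_p + sum(d[i]*r_d[i] - r_c[i]/s[i] for i in range(2))) / sum(d)
    ds = [r_d[i] - dy for i in range(2)]
    dx = [(r_c[i] - x[i]*ds[i]) / s[i] for i in range(2)]

    # Fraction-to-boundary step: stay strictly inside x > 0, s > 0.
    alpha = 1.0
    for i in range(2):
        if dx[i] < 0.0:
            alpha = min(alpha, -0.99 * x[i] / dx[i])
        if ds[i] < 0.0:
            alpha = min(alpha, -0.99 * s[i] / ds[i])

    x = [x[i] + alpha*dx[i] for i in range(2)]
    s = [s[i] + alpha*ds[i] for i in range(2)]
    y = y + alpha*dy

print(x)   # converges to the optimizer (0, 1)
```

Once the full Newton step is accepted, the (linear) primal and dual residuals are killed exactly, so feasibility and optimality are approached together rather than in separate phases; that, as I understand it, is the mechanism behind the quoted claim.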

But why do symmetric conic algorithms in particular lead to this? Is it that only they, and not barrier methods, have the properties that make it possible (e.g., a scaling point that maps the cone to itself)? I feel I have a fairly decent grasp of the mechanics of how they work, but I'm missing the bigger picture, the "why" of these algorithms.

To summarize, I want to follow up on the suggestion to use this class of conic PD algorithms for my work, but I need to be able to justify it a little more rigorously. Thanks for any help.