MD (Mirror Descent) over l_1 simplex lower bound proof


I'm looking for a proof of a lower bound for the Mirror Descent optimization algorithm over the l_1 simplex. I am not asking anyone to reproduce a proof here, but rather for a reference.

I did go over the Nemirovski & Yudin book on the subject (A. Nemirovski and D. Yudin. Problem complexity and method efficiency in optimization. Nauka Publishers, Moscow, 1978), but I just could not make it through the details, so I am looking for something more suitable for a graduate-level CS student.

I will try to be more specific. Please take a look at section 3.1.4 of the book, which gives an overview of mirror descent methods. I have a few technical questions:

  1. Why is V'(\phi) a linear functional on E* if all we know about V is that it is strongly convex and uniformly differentiable?
  2. \phi \in E*, but what is the trajectory \phi(t)?
  3. What does differential equation (1.1) mean?

I have seen other formulations of mirror descent that speak of minimizing a local approximation of the objective, where locality is measured by a Bregman divergence term with respect to some strongly convex potential function. How is the formulation in the book related to that one?
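For concreteness, here is a minimal sketch of the Bregman-divergence formulation I mean, specialized to the simplex with the negative-entropy potential (so the Bregman divergence is the KL divergence, and each proximal step has the well-known closed-form exponentiated-gradient update). The objective f, the step size eta, and the iteration count are my own toy choices for illustration, not anything from the book:

```python
import numpy as np

def mirror_descent_simplex(grad, x0, eta, steps):
    """Entropic mirror descent (exponentiated gradient) on the probability simplex.

    Each step solves the local-approximation problem
        x_{t+1} = argmin_{x in simplex} <grad f(x_t), x> + (1/eta) * D(x, x_t),
    where D(x, y) = sum_i x_i log(x_i / y_i) is the Bregman divergence of the
    negative-entropy potential. The closed-form solution is the
    multiplicative update below.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        x = x * np.exp(-eta * g)  # multiplicative (exponentiated-gradient) step
        x = x / x.sum()           # renormalize back onto the simplex
    return x

# Toy linear objective f(x) = <c, x>; its minimum over the simplex is the
# vertex e_1, since c_1 is the smallest coordinate of c.
c = np.array([1.0, 2.0, 3.0])
x = mirror_descent_simplex(lambda x: c, np.ones(3) / 3, eta=0.5, steps=200)
```

After enough steps the iterate concentrates on the first coordinate, i.e. it converges to the minimizing vertex e_1.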

Thanks
