Regret minimization for online convex optimization


I'm reading the paper "Online Convex Programming and Generalized Infinitesimal Gradient Ascent" (Zinkevich, 2003), which analyzes gradient descent and its properties in the online convex setting, and I am stuck on two questions:

  1. I cannot see where the proof of Theorem 1 uses the fact that $x^*$ is the minimizer of $C_x(T)$. It seems that we could replace $x^*$ with any other point in the feasible set and still obtain the same bound.

  2. This is a more general question about bounding the regret: does it tell us anything about convergence? If all the $c_t$ in this paper were the same, then the regret is always non-negative, so a sublinear bound on it guarantees convergence in some sense. But if the $c_t$ differ (so the regret can be negative), can we conclude anything from this bound?
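To make question 1 concrete, here is the step of the proof as I understand it, in my own notation ($\eta_t$ is the step size, the $c_t$ are differentiable and convex, and projection onto the convex feasible set is nonexpansive). For *any* fixed $x$ in the feasible set,

$$\|x_{t+1} - x\|^2 \le \|x_t - x\|^2 - 2\eta_t\,\nabla c_t(x_t)^\top (x_t - x) + \eta_t^2\,\|\nabla c_t(x_t)\|^2,$$

and by convexity $c_t(x_t) - c_t(x) \le \nabla c_t(x_t)^\top (x_t - x)$. Summing over $t$ and telescoping seems to give the same bound for every fixed comparator $x$, not just for $x = x^*$.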
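Regarding question 2, here is a small numerical sanity check I tried for the identical-loss case (the setup is my own, not from the paper: losses $c_t(x) = (x-3)^2$ on $[-10, 10]$ with step size $\eta_t = 1/\sqrt{t}$):

```python
import numpy as np

def ogd(T, x0=0.0):
    """Projected online gradient descent on identical losses c_t(x) = (x - 3)^2,
    feasible set [-10, 10], step size eta_t = 1/sqrt(t)."""
    x = x0
    iterates = []
    for t in range(1, T + 1):
        iterates.append(x)
        grad = 2.0 * (x - 3.0)        # gradient of c_t at x_t
        x = x - grad / np.sqrt(t)     # gradient step with eta_t = 1/sqrt(t)
        x = np.clip(x, -10.0, 10.0)   # project back onto the feasible set
    return np.array(iterates)

T = 10_000
xs = ogd(T)
cost = (xs - 3.0) ** 2
regret = cost.sum() - 0.0             # best fixed point is x* = 3, with total cost 0
avg_iterate = xs.mean()

print(avg_iterate)   # close to 3
print(regret / T)    # average regret, close to 0
```

In this run the average regret goes to zero and the average iterate approaches the minimizer, which matches my intuition for the identical-loss case; what I cannot see is what survives of this when the $c_t$ vary.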

Thanks in advance!