Basic questions about convex optimization

I have some basic questions about convex optimization.

  1. From sources I've found online, I've seen that many algorithms (for example, Newton's method) describe their complexity as $O(\frac{1}{\epsilon})$. What exactly does $\epsilon$ mean? Does it mean that the point $x$ found by the algorithm is within distance $\epsilon$ of the optimal point $p$, or does it mean that $f(x)$ is within $\epsilon$ of $f(p)$? If it's the latter, is $\epsilon$ an additive error (i.e. $f(x) \le f(p) + \epsilon$) or a multiplicative one (i.e. $f(x) \le (1 + \epsilon) f(p)$)?
  2. I have seen convex optimization problems phrased as:

Choose $x$ to minimize $f$ subject to $g_1(x) \le 0, \dots, g_n(x) \le 0$, where $f, g_1, \dots, g_n$ are all convex functions.

Do convex optimization algorithms find an approximate minimizer of $f$ that exactly satisfies the $g_j$ constraints? Or do they find an approximate minimizer of $f$ that only comes within $\epsilon$ of satisfying them (say, $g_j(x) \le \epsilon$)? (I try to make both questions concrete in the sketch after this list.)
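
To make questions 1 and 2 concrete, here is a minimal Python sketch of the kind of thing I mean (assuming the cvxpy package; the toy problem and names like `x_hat` are just mine for illustration):

```python
# A minimal sketch, not from any particular source: minimize a convex f subject to
# convex constraints g_j(x) <= 0, then look at what the solver actually returns.
import cvxpy as cp
import numpy as np

# Toy problem: f(x) = ||x - c||^2, g_1(x) = sum(x) - 1 <= 0, g_2(x) = -x <= 0 (elementwise).
c = np.array([0.8, 0.6])
x = cp.Variable(2)
objective = cp.Minimize(cp.sum_squares(x - c))
constraints = [cp.sum(x) - 1 <= 0, -x <= 0]

prob = cp.Problem(objective, constraints)
prob.solve()  # the solver iterates until its internal tolerances are met

x_hat = x.value     # the approximate minimizer returned by the solver
f_hat = prob.value  # f(x_hat), an approximation of the true optimal value f(p)

# Question 1, restated on this example: is the guarantee on ||x_hat - p||, or on
#   additive error:        f(x_hat) <= f(p) + eps, or
#   multiplicative error:  f(x_hat) <= (1 + eps) * f(p)?
# Question 2, restated: are the residuals below guaranteed to be exactly <= 0,
# or only <= some small feasibility tolerance?
print("x_hat =", x_hat)
print("f(x_hat) =", f_hat)
print("max constraint violations:", [float(np.max(g.violation())) for g in constraints])
```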

And my final question: is there an online resource you can point me to that offers a gentle overview of convex optimization algorithms (not necessarily how they work, but more what they do) and that might answer questions like the above? If not online, a print resource would be helpful too; I have access to a university library system.

Thanks!