I've just watched a lecture by Ryan Tibshirani where he showed a simple proof that no matter what the criterion $f(x)$ is, even if it is non-convex, as long as the equality and inequality constraints satisfy some reasonable conditions, the dual problem is guaranteed to be convex (the dual function is concave, and we maximize it). The dual function $g(u,v)$ is
$$ g(u,v) = \min_x \left\{ f(x) + \sum_{i=1}^{m} u_i h_i(x) + \sum_{i=1}^{r} v_i l_i(x) \right\} $$
Now doesn't this essentially finish off the field of optimization? We know we can numerically solve convex problems in polynomial time. If we can turn almost any problem into a convex problem through the dual, and the dual is a lower bound on the true criterion, then we can approximately solve almost all problems, right?
The answers to your questions are negative. First, just because the dual problem is convex does not mean it can be solved in polynomial time. For example, optimization over copositive matrices can be co-NP-hard, in the sense that checking membership in the complement of the copositive cone is NP-hard. Second, even when we can solve the (convex) dual problem, the lower bound it provides can be far from the primal optimal value; that is, the duality gap can be large in general.
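To make the second point concrete, here is a minimal numeric sketch (my own toy example, not from the lecture) where the duality gap is infinite: minimize the non-convex objective $f(x) = -x^2$ subject to $0 \le x \le 1$. The primal optimum is $-1$, but the Lagrangian is unbounded below in $x$ for every fixed multiplier, so the dual function is identically $-\infty$:

```python
import numpy as np

# Non-convex toy primal: minimize f(x) = -x^2 subject to 0 <= x <= 1.
# Inequality constraints in standard form:
#   h1(x) = x - 1 <= 0,   h2(x) = -x <= 0,   multipliers u1, u2 >= 0.
def f(x):
    return -x**2

# Primal optimum by brute force over the feasible interval: attained at x = 1.
xs = np.linspace(0.0, 1.0, 1001)
primal_opt = f(xs).min()  # = -1.0

# Lagrangian L(x, u1, u2) = -x^2 + u1*(x - 1) + u2*(-x).
# For any fixed u1, u2 >= 0 the -x^2 term dominates as |x| grows,
# so min_x L(x, u1, u2) = -inf, and the dual function g(u) is -inf everywhere.
def lagrangian(x, u1, u2):
    return -x**2 + u1 * (x - 1) + u2 * (-x)

u1, u2 = 1.0, 1.0
big_x = np.array([10.0, 100.0, 1000.0])
print(primal_opt)                 # -1.0
print(lagrangian(big_x, u1, u2))  # decreases without bound as x grows
```

So the best dual lower bound here is $-\infty$ while the primal value is $-1$: the dual is still a convex (indeed trivial) problem, but it tells you nothing about the primal. Strong duality needs extra structure, such as convexity of the primal plus a constraint qualification like Slater's condition.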