Estimating the convexity of a black-box objective function by multi-starting a gradient-based method


In the context of optimising a differentiable black-box function: I have a simulation that returns a scalar result depending on the choice of some continuous design variables with upper and lower bounds.

My idea was to run a gradient-based optimisation algorithm (such as conjugate gradient) from many different starting points. If the algorithm always converges to the same (or very similar) values of the design variables, I would conclude that there is a single global optimum and no local optima, and thus that the function is likely convex. Conversely, if it returns different optima, the function must be non-convex.
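The multi-start idea can be sketched as follows. This is a minimal illustration, not the asker's actual setup: the 1-D objective below is a hypothetical stand-in for the black-box simulation, gradients are estimated by central differences, and a simple projected gradient descent replaces conjugate gradient.

```python
import numpy as np

def f(x):
    # Hypothetical stand-in for the black-box simulation (nonconvex).
    return np.sin(3 * x) + 0.1 * x ** 2

def grad(x, h=1e-6):
    # Central-difference gradient estimate, since f is a black box.
    return (f(x + h) - f(x - h)) / (2 * h)

def descend(x0, lo=-3.0, hi=3.0, step=0.01, iters=2000):
    # Projected gradient descent respecting the box bounds.
    x = x0
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lo, hi)
    return x

rng = np.random.default_rng(0)
minimizers = [round(float(descend(rng.uniform(-3, 3))), 2) for _ in range(20)]
distinct = sorted(set(minimizers))
print(distinct)  # several distinct minimizers => evidence of nonconvexity
```

For this test function the runs settle into several distinct local minima, which is exactly the signal the proposed heuristic would interpret as nonconvexity. Note that finding only one minimizer would not establish convexity, as the answer below explains.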

My primary question is whether this line of reasoning is valid, and secondly whether there are better ways to estimate convexity for black-box functions.


This line of reasoning is not valid, for two simple reasons:

  1. Nonconvex functions can also have a single global minimizer. For example, $f(x) = -e^{-x^2}$ is nonconvex but has the unique global minimizer $x = 0$.
  2. You have no way of knowing which point the gradient method will converge to.

I do not recommend such 'experimental' methods, since they do not reveal *why* the function is convex. However, if you do insist on checking convexity on some compact domain $D$ via simulation, I would do it using the gradient inequality: choose random points $x, y \in D$ and verify that $$ f(y) \geq f(x) + \nabla f(x)^T (y - x) $$ and $$ f(x) \geq f(y) + \nabla f(y)^T (x - y). $$ If one of these inequalities fails for some pair of points, the function is not convex; if no violation is found, that is only evidence of convexity, not a proof.
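The random-pair test above can be sketched as follows. Everything here is a hedged illustration: `f` is a hypothetical convex stand-in for the black box, gradients are estimated by central differences, and the function name `find_convexity_violation` is my own.

```python
import numpy as np

def f(x):
    return float(np.sum(x ** 2))  # convex stand-in for the black box

def grad(x, h=1e-6):
    # Central-difference gradient estimate (f is only available by evaluation).
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def find_convexity_violation(dim=2, n_pairs=500, lo=-1.0, hi=1.0,
                             tol=1e-6, seed=0):
    """Return a pair (x, y) violating the gradient inequality, else None.

    None means no violation was detected on the sampled pairs -- evidence
    for convexity on the box, not a proof.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_pairs):
        x = rng.uniform(lo, hi, dim)
        y = rng.uniform(lo, hi, dim)
        # Check both inequalities from the answer.
        if f(y) < f(x) + grad(x) @ (y - x) - tol:
            return x, y
        if f(x) < f(y) + grad(y) @ (x - y) - tol:
            return y, x
    return None

print(find_convexity_violation())  # None for the convex stand-in
```

The tolerance `tol` guards against finite-difference and floating-point noise being mistaken for a genuine violation; replacing `f` with the nonconvex $-e^{-x^2}$ from point 1 would make the test return a violating pair.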