Black box optimization


I have a simulation that returns a scalar result depending on the choice of some continuous design variables, and I am trying to minimize this output. As a first step, I want to study the convexity of the problem. Is there a methodology for determining whether a black-box function (one with no analytical form) is convex?

I have already found a pair $\{\vec{x}_1, \vec{x}_2\}$ that violates midpoint convexity, i.e.

\begin{equation} f(\vec{x}_1)+f(\vec{x}_2) < 2f\left(\frac{\vec{x}_1+\vec{x}_2}{2}\right) \end{equation}

Is this sufficient to conclude that the function is nonconvex? Furthermore, if it is nonconvex, how does that affect the choice of optimizer?

I have read that evolutionary algorithms can be a good choice for black-box optimization. However, I do not have much experience in the field of optimization, and I would appreciate it if someone could point me in the right direction.

1 Answer
A function $f$ is convex if for all $x_1, x_2$ and all $\lambda \in [0,1]$ the following inequality holds: $$ f(\lambda x_1 + (1 - \lambda) x_2) \le \lambda f( x_1 ) + (1 - \lambda ) f( x_2 ).$$

What is nice about convex functions is that every stationary point is a global minimizer: if the gradient vanishes at a point $x^\star$ (I assume here that your function is differentiable), i.e. $\nabla \!f(x^\star) = 0$, then $x^\star$ is an optimal solution (in your case, the optimal design variables). The minimum can then be reached using gradient descent, for example.

You can get an idea of whether your function is convex by generating many pairs of points $x_1, x_2$, evaluating the midpoint $0.5(x_1+x_2)$, and checking that the convexity inequality holds for $\lambda = 0.5$, i.e. $$ f(0.5 (x_1 + x_2)) \le 0.5 f( x_1 ) + 0.5 f( x_2 ).$$
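A minimal sketch of this sampling test, where a hypothetical convex quadratic stands in for the black-box simulation (in practice you would call your simulator instead of `f`):

```python
import numpy as np

def f(x):
    # Placeholder for the black-box simulation (here a convex quadratic).
    return np.sum(x ** 2) + 1.0

rng = np.random.default_rng(0)
violations = 0
for _ in range(1000):
    x1 = rng.uniform(-5.0, 5.0, size=3)
    x2 = rng.uniform(-5.0, 5.0, size=3)
    mid = 0.5 * (x1 + x2)
    # Midpoint convexity check for lambda = 0.5, with a small tolerance
    # to avoid flagging floating-point noise as a violation.
    if f(mid) > 0.5 * f(x1) + 0.5 * f(x2) + 1e-12:
        violations += 1

print(violations)  # 0 for this convex example; any violation proves nonconvexity
```

Note that each simulation run may be expensive, so in practice you would cap the number of sampled pairs by your evaluation budget.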

However, such sampling does not constitute a proof of convexity. Nevertheless, if you can find a single pair of points such that $$ f(0.5 (x_1 + x_2)) > 0.5 f( x_1 ) + 0.5 f( x_2 ),$$ then you have a proof that the function is not convex (which, I think, is exactly what you have).

Now, with regard to evolutionary algorithms: they are derivative-free heuristics that can find good local minima on nonconvex problems, but they come with no proof of optimality (then again, you cannot prove convexity of a black-box function anyway). So you could use such an algorithm, or apply gradient descent with the gradient estimated by finite differences.
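A sketch of the finite-difference approach, again with a hypothetical smooth test function standing in for the simulation; the step size `h` and learning rate are assumptions you would tune for your problem:

```python
import numpy as np

def f(x):
    # Placeholder for the black-box simulation (minimizer at [1, 1, 1]).
    return np.sum((x - 1.0) ** 2)

def fd_gradient(f, x, h=1e-6):
    # Forward-difference gradient estimate: one extra f-evaluation per dimension.
    fx = f(x)
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

x = np.zeros(3)
for _ in range(200):
    x -= 0.1 * fd_gradient(f, x)  # plain gradient descent with fixed step

print(np.round(x, 3))
```

For this quadratic the iterates converge to the minimizer $[1, 1, 1]$. For a population-based alternative, `scipy.optimize.differential_evolution` implements an evolutionary strategy with a similar "just give me the function" interface.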