Best derivative-free numerical optimization methods for lengthy function evaluations


I need to find the global minimum of an optimization problem with seven parameters (all linearly constrained).
My problem is that one evaluation of my function takes about 15 minutes (my code is pretty optimized and I cannot reduce the computation time any further).

As a consequence, I need an algorithm that can detect the global minimum (or at least get close to the region where it lies) with a relatively small number of function evaluations (say about 1000, 2000 at most). I will only impose simple constraints (lower and upper bounds) on my parameters.
The algorithm must be derivative-free.

So far I have gone with simulated annealing, but I think there might be better options out there (I find simulated annealing does quite well on problems with 2-3 parameters, but it starts to take too long with more parameters). I also tried some versions of Controlled Random Search, but the results did not seem any more convincing. As solving the problem takes time, I do not want to launch a run only to discover later that I could have solved the problem with a better algorithm.
I am not aware of all the different algorithms out there (especially the evolutionary/genetic algorithms), hence my question: do you know which kind of algorithm would be best suited to my problem? (I can implement it myself in C++ afterwards.)

More information about my work (not sure it is necessary): I am running a simulated method of moments. In my model, the choice of parameters affects the value function of the agents, which I can only solve via value function iteration; that takes about 15 minutes, hence the long running time. With the value function in hand, I quickly simulate my model and obtain simulated moments to compare to the real moments, but this part takes almost no time. I have no direct information on how each parameter impacts the distance between real and simulated moments (the output of my function), hence the need for derivative-free methods.
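Concretely, the objective I am minimizing is the standard SMM criterion (here $W$ is some positive-definite weighting matrix, e.g. the identity in the simplest case):

$$\hat\theta = \arg\min_{\theta} \; \big(m_{\mathrm{sim}}(\theta) - m_{\mathrm{data}}\big)' \, W \, \big(m_{\mathrm{sim}}(\theta) - m_{\mathrm{data}}\big),$$

where each evaluation of $m_{\mathrm{sim}}(\theta)$ requires the 15-minute value function iteration.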

1 Answer

Without knowing anything about your objective function (except that it's hard to evaluate), I don't think there's much that can be said. A sufficiently narrow "well" is likely to be missed by any search algorithm you might use.

If you had bounds on the derivatives, you could find a finite set of points in a compact feasible region such that one of these points is guaranteed to have objective value within $\epsilon$ of the global minimum value. However, if your search space is $7$-dimensional the number of such points will grow like $\epsilon^{-7}$ as $\epsilon \to 0$, so this might not be very helpful.