I am looking for an optimization algorithm / method / library that can efficiently find an estimated local maximum of a function. For example, the function could be f(x) with 0 < x < 10, and I am looking for an accuracy of 0.1 in x. Evaluating f(x) is expensive, so I want to find the maximum with as few evaluations of f(x) as possible.
I suspect a relationship between x and y = f(x), but I am not certain of it, so the whole optimization may or may not yield a usable result.
As a naive approach, I could evaluate f(0.1) and f(9.9); if f(0.1) > f(9.9), I would next compare f(0.1) and f(5.0), and so on, halving the interval each time. After n steps I end up with a local maximum. This is a simplistic approach, but it would be acceptable if I were dealing with only one parameter.
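To make that naive scheme concrete, here is a minimal sketch (the function name, bounds and stopping rule are my own choices, matching the example above); note that the interval-halving can discard the true peak unless f is well behaved:

```python
def naive_max(f, lo=0.1, hi=9.9, tol=0.1):
    """Compare the interval endpoints and repeatedly shrink the
    interval toward the better endpoint. Each iteration costs one
    new evaluation of f; stops once the interval is narrower than tol.
    Caveat: this only homes in on the maximum for well-behaved f."""
    f_lo, f_hi = f(lo), f(hi)
    evals = 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = f(mid)
        evals += 1
        if f_lo > f_hi:
            hi, f_hi = mid, f_mid   # keep the half next to the better endpoint
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi), evals
```

On a smooth single-peak example it needs only a handful of evaluations, which is the whole point.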
In reality, however, I am looking for the maximum of a function of multiple parameters, e.g. f(x1, x2) with 0 < x1, x2 < 10. I am actually dealing with eight or more parameters, but I don't think that changes the nature of the problem.
I could reuse the naive approach and permute x1 and x2, but this would a) give me far more combinations to test (with eight parameters at a resolution of 0.1, an exhaustive grid is on the order of 100^8 = 10^16 evaluations) and b) not take into account that there may be a relationship between x1 and x2, a fact that could speed up the optimization.
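For what it's worth, the per-parameter variant I have in mind amounts to coordinate ascent: optimize one parameter while holding the others fixed, then cycle through them (a sketch only, names and step sizes are mine). It avoids the full grid, i.e. point a), but still ignores point b), the interactions between parameters:

```python
def coordinate_ascent(f, x0, lo=0.1, hi=9.9, step=1.0, sweeps=3):
    """Cycle through the coordinates, searching each one on a coarse
    1-D grid while the others stay fixed. Because each coordinate is
    optimized in isolation, it can stall on strongly coupled functions."""
    x = list(x0)
    for _ in range(sweeps):
        for i in range(len(x)):
            best_v, best_y = x[i], f(x)
            v = lo
            while v <= hi:
                x[i] = v
                y = f(x)
                if y > best_y:
                    best_v, best_y = v, y
                v += step  # coarse step to keep the evaluation count low
            x[i] = best_v
    return x
```

Even this needs roughly (sweeps x parameters x grid points) evaluations, which is why I am looking for something smarter.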
I think a machine learning approach could help, but ML would usually look at existing results of f(x1, x2), learn from that data and estimate a maximum. It cannot, or at least I don't have an algorithm at hand that can, decide which point (x1, x2) to evaluate next so as to reach an estimate with as few evaluations of f(x1, x2) as possible.
A gradient ascent optimization also seems like a good match for my requirements, but all the examples and implementations of the algorithm I have found seem to work on precalculated data, too. Moreover, the algorithm does not seem to take relationships between the parameters into account.
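For reference, gradient ascent does not strictly need precalculated data: the gradient can be estimated on the fly with finite differences. The catch, in my expensive-evaluation setting, is that each step then costs n+1 evaluations of f for n parameters (a sketch; the step size, difference width and names are my own choices):

```python
def gradient_ascent(f, x0, step=0.5, h=0.05, iters=20, lo=0.1, hi=9.9):
    """Estimate the gradient by forward differences (n extra
    evaluations of f per step for n parameters), move uphill,
    and clip the result to the box constraints."""
    x = list(x0)
    for _ in range(iters):
        fx = f(x)
        grad = []
        for i in range(len(x)):
            xh = list(x)
            xh[i] += h
            grad.append((f(xh) - fx) / h)   # forward difference in coordinate i
        # take an uphill step and stay inside the box
        x = [min(hi, max(lo, xi + step * g)) for xi, g in zip(x, grad)]
    return x
```

With 8 parameters and 20 iterations that is already 180 evaluations, and the step size needs tuning, so this may still be too expensive for my case.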
So I am hoping to get more ideas here. I could implement an algorithm myself, but ideally a free / open source implementation already exists.