Unconstrained function minimization when only given samples?


Let's say we want to minimize a function $f$ for which we only have evaluations at a finite set of sample points, i.e. the data

$$\{(x_i, f(x_i))\}_{i=1}^n,$$

with no analytic expression for $f$ itself. One way to solve these types of problems is to identify a search direction, as in gradient descent. But how do we find the gradient if we are only given discrete function evaluations? Do we approximate it using finite differences? And what about the Hessian?
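If additional evaluations of $f$ can be requested near the current iterate, the gradient can indeed be estimated by central finite differences. A minimal sketch in Python/NumPy (the function name `fd_gradient` and the step size `h` are illustrative choices, not from the question):

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        # Central difference: O(h^2) accurate, costs 2 evaluations per coordinate.
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Example: f(x) = ||x||^2, whose true gradient is 2x.
f = lambda x: float(np.dot(x, x))
print(fd_gradient(f, np.array([1.0, -2.0])))  # ≈ [ 2. -4.]
```

The same idea applied to the gradient estimates (or second-order differences) gives a Hessian approximation, at the cost of $O(n^2)$ evaluations, which is why quasi-Newton updates such as BFGS are usually preferred in higher dimensions.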

Another approach could be to build a local approximation (a surrogate model) of $f$ from the data points and minimize that approximation within a trust region. But are there other approaches beyond the two main frameworks of line search and trust region?