Question on application of randomized algorithms to solve nonconvex optimization problems

In *Convex Optimization*, Boyd & Vandenberghe give the following example of a randomized algorithm for approximately solving a nonconvex optimization problem:

[given a nonconvex problem], a candidate solution is found by drawing some number of candidates from a probability distribution and taking the best one found as the approximate solution.
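A minimal sketch of that procedure, under assumptions of my own (a toy one-dimensional nonconvex objective `f0` and a Gaussian sampling distribution, neither of which comes from the book):

```python
import math
import random

def f0(x):
    # hypothetical nonconvex objective with several local minima (my example)
    return x * x + 3.0 * math.sin(5.0 * x)

def best_of_samples(mu, sigma, n=1000, seed=0):
    """Draw n candidates x ~ N(mu, sigma^2) and return the best one found."""
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        v = f0(x)
        if v < best_val:
            best_x, best_val = x, v
    return best_x, best_val
```

The quality of the answer clearly depends on the distribution the candidates are drawn from, which is what motivates the question below.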

Now suppose the family of distributions from which we'll draw the candidates is parametrized e.g. by its mean and covariance.

We can then ask: which of these distributions gives the smallest expected value of the objective? It turns out that this problem is sometimes convex, and therefore efficiently solvable (since we have efficient methods for solving convex problems).

How does this help us minimise the original objective function?

My thinking is as follows. Suppose $f_0(x)$ is the nonconvex objective function. Treat $x$ as a random variable with p.d.f. $g(x;\theta)$. Then $f_0(x)$ is a function of a random variable, and hence a random variable itself. Its expected value is

$$E[f_0(x)]=\int f_0(x)\,g(x;\theta)\,dx$$

which is a function of $\theta$. I presume this is the problem that is sometimes convex, so let's assume this is the case.
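For concreteness (my own sketch, not from the text): with a Gaussian family, $\theta=(\mu,\sigma)$, the expectation can be estimated by Monte Carlo, and as $\sigma\to 0$ the distribution concentrates at $\mu$, so the expected objective approaches $f_0(\mu)$:

```python
import math
import random

def f0(x):
    # hypothetical nonconvex objective (my example, not from the book)
    return x * x + 3.0 * math.sin(5.0 * x)

def expected_f0(mu, sigma, n=20000, seed=0):
    """Monte Carlo estimate of E[f0(x)] for x ~ N(mu, sigma^2)."""
    rng = random.Random(seed)
    return sum(f0(rng.gauss(mu, sigma)) for _ in range(n)) / n
```

As `sigma` shrinks, `expected_f0(mu, sigma)` approaches `f0(mu)`, which suggests one link between minimising the expectation over $\theta$ and minimising $f_0$ itself.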

How does finding the $\theta$ that minimises this expectation give us a solution to the nonconvex original problem?