Can non-convex optimization problems have closed form solutions?


In the realm of statistical modelling, fitting a statistical model to some data involves optimizing some mathematical function (e.g. a Loss Function, the OLS objective, the Maximum Likelihood equation, etc.): determining the final beta regression coefficients involves solving an optimization problem.

From what I see:

  • Sometimes, the function to be optimized has a "closed form solution". For example, estimating the coefficients of a Linear Regression model involves optimizing an OLS or MLE objective - and this optimization problem has a "closed form solution". This means that we can compute the regression coefficients directly and do not require an iterative optimization algorithm. I have noticed that in many of the instances where the optimization problem has a "closed form solution", the function being optimized is usually "Convex".

  • However, in most cases these optimization problems do not have "closed form solutions" (e.g. Neural Networks, Logistic Regression, etc.). In these cases, an iterative optimization algorithm (e.g. Gradient Descent) has to be used to estimate the model parameters (e.g. the weights of a Neural Network). I have also noticed that when this is the case, the function being optimized is usually "Non-Convex".
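To make the contrast between the two cases concrete, here is a minimal sketch (using NumPy, on synthetic data - the data, step size, and iteration count are my own illustrative choices) that solves the same least-squares problem both ways: once via the closed-form OLS formula, and once via iterative gradient descent.

```python
import numpy as np

# Synthetic regression data (illustrative, not from the question)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # design matrix
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=100)

# Closed-form OLS solution: beta = (X^T X)^{-1} X^T y
beta_closed = np.linalg.solve(X.T @ X, X.T @ y)

# Iterative alternative: gradient descent on the same convex objective
beta_gd = np.zeros(3)
lr = 0.01
for _ in range(5000):
    # Gradient of (1/2n) * ||y - X beta||^2
    grad = X.T @ (X @ beta_gd - y) / len(y)
    beta_gd -= lr * grad

# Both routes reach (essentially) the same minimizer
print(np.allclose(beta_closed, beta_gd, atol=1e-4))
```

Because the least-squares objective is convex with a unique minimizer here, the iterative route converges to the same coefficients the closed-form formula produces in one step.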

I understand that some Convex Functions have "closed form solutions" for their optima, while other Convex Functions do not.

But:

  • Can an optimization problem that is Non-Convex (i.e. involves optimizing a Non-Convex Function) ever have a "closed form solution"? Or is this never the case?

Thank you!

Best answer:

Certainly, nonconvex optimization problems can have closed-form solutions, but in general they do not. Nonconvex optimization is, in general, NP-hard. This means, loosely speaking, that the time it takes to find a global minimum grows exponentially with the problem size. This exponential growth often renders finding a global minimum infeasible (it can take years even on supercomputers), so we often settle for finding a local minimum. "Problem size" in this context could mean the dimension of the function's domain, the number of grid points in a numerical approximation to the function, and so on.

A simple example of a nonconvex optimization problem with a closed-form solution is

\begin{align} \min_{ x \in [0,2\pi] } \cos(x). \end{align}

$\cos(x)$ is nonconvex on $[0,2\pi]$, but the answer to this particular nonconvex problem is clearly $\cos(\pi)=-1$, attained at $x=\pi$. Note that this is a somewhat contrived example; whenever you see "nonconvex optimization," you should think "that's probably a hard problem unless there is some special structure in the particular problem I'm looking at."
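As a quick numerical sanity check of that closed-form answer, one can brute-force the problem on a grid (a sketch using NumPy; the grid resolution is an arbitrary choice of mine):

```python
import numpy as np

# Minimize cos(x) over [0, 2*pi] by brute-force grid search
x = np.linspace(0, 2 * np.pi, 100_001)
vals = np.cos(x)
i = np.argmin(vals)

print(x[i])     # close to pi
print(vals[i])  # close to -1
```

The grid search lands on the same minimizer the closed form gives, which is only practical here because the domain is one-dimensional; for high-dimensional nonconvex problems, exactly this kind of exhaustive search is what becomes exponentially expensive.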