Optimization over convex majorants of a function


Fix a well-behaved function $g:[0,1]\to\mathbb{R}$. (Say continuous, though I would want to relax this.)

Consider the set of convex majorants of $g$: $$A = \{f:[0,1] \to \mathbb{R} \;\;|\;\; f \text{ is convex}, \;f(x) \ge g(x) \;\forall x\in[0,1]\}.$$

Given a probability measure $\mu$ on $[0,1]$, consider the problem $$\min_{f \in A} \int_0^1 f(x)d\mu(x).$$

Are there known results about this kind of problem? For example:

  • Sufficient conditions on $g$ and $\mu$ that guarantee a solution?

  • Useful relaxations of the convexity constraint (note that typically the optimal $f$ is not differentiable)?

  • Techniques from calculus of variations that can be adapted to this "non-smooth" question?


A linear programming problem

One can show that the feasible set

$$A = \{f:[0,1] \to \mathbb{R} \;\;|\;\; f \text{ is convex}, \;f(x) \ge g(x) \;\forall x\in[0,1]\}$$

is a convex set. This is because for two functions $f_1(x),f_2(x)\in A$, we have

$$\lambda f_1(x)+(1-\lambda)f_2(x)\geq[\lambda+(1-\lambda)]\,g(x)=g(x),$$

for all $x,\lambda\in[0,1]$, since $f_1\ge g$ and $f_2\ge g$. Moreover, $\lambda f_1(x)+(1-\lambda)f_2(x)$ is convex whenever $f_1$ and $f_2$ are, for $\lambda\in[0,1]$. Therefore $\lambda f_1(x)+(1-\lambda)f_2(x)\in A$. The convexity of $A$ holds even if the "$f$ is convex" condition is removed, but dropping it makes the minimization trivial: the optimum would simply be $f=g$ whenever $\mu$ has an everywhere-positive density. So we keep the "$f$ is convex" condition and look for a convex majorant of $g(x)$ that is closest to $g(x)$ in the sense of minimizing the error

$$e[f]=\int_0^1[f(x)-g(x)]d\mu(x).$$

Since $\int_0^1 g(x)\,d\mu(x)$ is a constant, minimizing $e[f]$ is the same as minimizing $\int_0^1 f(x)\,d\mu(x)$. The error functional $e[f]$ is linear with an inhomogeneous term (the mathematical word is "affine"). So the entire optimization problem is convex, and after discretization it is solvable using linear programming. You can solve a few examples using the discretized formulation of the problem below:

\begin{align} &\mbox{Minimize }\,\sum_{i=0}^N w_if_i\\ &\mbox{Subject to }\;f_i\geq g_i,\;i=0,1,\ldots,N,\\ &\phantom{\mbox{Subject to }}\;f_i\leq\frac{f_{i-1}+f_{i+1}}{2},\;i=1,2,\ldots,N-1, \end{align}

where $f_i\equiv f(i/N)$, and similarly $g_i\equiv g(i/N)$ and $w_i\equiv\mu'(i/N)/N$ when $\mu$ has a density $\mu'$. (Note the middle constraint is $f_i\leq$ the average of its neighbors: a convex function lies below the chord, so the discrete second difference $f_{i-1}-2f_i+f_{i+1}$ must be nonnegative.) Any linear programming package can solve this. There is no guarantee that the solution $f(x)$ will be smooth (e.g. $|x-\frac{1}{2}|$ is convex but not differentiable at $x=\frac{1}{2}$), so the calculus of variations is of little help here. Singularities of $g(x)$ are not a serious issue as long as the grid points $i/N$ avoid them.
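As a concrete sketch (not from the answer itself), the discretized LP can be set up with `scipy.optimize.linprog`. Here $g(x)=\sin(2\pi x)$ and uniform weights $w_i=1/(N+1)$ are illustrative choices; both constraints are rewritten in `linprog`'s $A_{ub} f \le b_{ub}$ form.

```python
import numpy as np
from scipy.optimize import linprog

N = 50
x = np.arange(N + 1) / N
g = np.sin(2 * np.pi * x)          # sample g on the grid (illustrative choice)
w = np.full(N + 1, 1.0 / (N + 1))  # uniform mu: w_i = 1/(N+1)

# f_i >= g_i   rewritten as   -f_i <= -g_i
A_major = -np.eye(N + 1)
b_major = -g

# f_i <= (f_{i-1} + f_{i+1})/2   rewritten as   -f_{i-1} + 2 f_i - f_{i+1} <= 0
A_conv = np.zeros((N - 1, N + 1))
for i in range(1, N):
    A_conv[i - 1, i - 1] = -1.0
    A_conv[i - 1, i] = 2.0
    A_conv[i - 1, i + 1] = -1.0
b_conv = np.zeros(N - 1)

res = linprog(c=w,
              A_ub=np.vstack([A_major, A_conv]),
              b_ub=np.concatenate([b_major, b_conv]),
              bounds=(None, None))   # f_i is a free variable
f = res.x
```

The LP is bounded (each $f_i\ge g_i$ and the weights are positive) and feasible (any constant $\ge\max g$ works), so a minimizer exists for this discretization.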


Soft convexity constraint

If you relax the convexity of $f$, you are no longer seeking a true convex majorant of $g(x)$. One option is to replace the hard constraints $f_i\leq(f_{i-1}+f_{i+1})/2$ by a soft non-convexity penalty. Then you solve

\begin{align} &\mbox{Minimize }\,\sum_{i=0}^N w_if_i+C\sum_{i=1}^{N-1}\epsilon_i\\ &\mbox{Subject to }\;f_i\geq g_i,\;i=0,1,\ldots,N,\\ &\phantom{\mbox{Subject to }}\;f_i\leq\frac{f_{i-1}+f_{i+1}}{2}+\epsilon_i,\\ &\phantom{\mbox{Subject to }}\;\epsilon_i\geq 0,\;i=1,2,\ldots,N-1, \end{align}

which is still solvable using linear programming. The parameter $C>0$ controls how strongly you insist on the convexity condition. In the $C\rightarrow+\infty$ limit, this soft-convexity formulation recovers the hard-constrained convex problem.