Constrained to unconstrained optimization problem by substitution


Given the following convex optimization problem

$\min_{x,p} ||x|| - p$

subject to $p > 0$

Can I change the above to an unconstrained convex optimization problem by substituting $c = \log(p)$ and minimizing

$\min_{x,c} ||x|| - \exp(c)$

If this is possible I am interested in literature about this topic, but I couldn't find much.

Edit: This is just a minimal example. The problem in the example is of course unbounded, but I think my full optimization problem adds more complexity than necessary to answer my question.

3 Answers

Best answer

I would suggest that you reconsider sparing us from the complexity of your problem. I think it's clear that we're all missing something here.

First of all: as I stated in the comment above, your proposed reformulation is not convex. Your objective function becomes the difference of two convex functions ($||x||$ and $\exp(c)$ are both convex, and you are subtracting one from the other). Given that you've set the word "convex" in boldface above, I assume convexity is important to you (and it should be!). So the short, but likely unhelpful, answer to your question is no, you cannot do that.
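The non-convexity is easy to verify numerically. The following sketch (with assumed sample points, not taken from the question) checks the midpoint-convexity inequality for the substituted term:

```python
# Quick numeric check (a sketch with assumed sample points) that
# c -> -exp(c), the term introduced by the substitution, is not convex:
# a convex function must satisfy g((a + b)/2) <= (g(a) + g(b))/2.
import math

g = lambda c: -math.exp(c)
a, b = 0.0, 2.0
mid = g((a + b) / 2)          # value at the midpoint
chord = (g(a) + g(b)) / 2     # average of the endpoint values
print(mid > chord)  # True: midpoint lies above the chord, so g is not convex
```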

But why would you want to do this in the first place? If you have just a single nonnegativity to deal with, it seems like an unnecessarily complex way to handle it. There's no need to be afraid of constrained convex optimization, especially for a simple constraint like that.

Still, suppose you have good reason to limit yourself to unconstrained minimizations. Then we can still dispatch a simple model like this easily. Consider the model $$\begin{array}{ll}\text{minimize}_{x,p} & f(x,p) \\ \text{subject to} & p > 0 \end{array}$$ where $f$ is convex in $x$ and $p$. Now consider solving a pair of unconstrained problems: $$\begin{array}{ll}\text{minimize}_{x,p} & f(x,p) \end{array}$$ $$\begin{array}{ll}\text{minimize}_x & f(x,\epsilon) \end{array}$$ for some small $\epsilon>0$ (indeed, $\epsilon=0$ might work depending on some mild continuity conditions on $f$). If the first problem obtains $p>0$, then that is the solution to the constrained problem. If the first problem obtains $p\leq 0$, then the second problem yields the solution to the constrained problem (approximately, if $\epsilon\neq 0$).
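A minimal sketch of this two-problem strategy, using an assumed toy convex objective (my own choice, not from the question) whose unconstrained minimizer deliberately violates $p > 0$:

```python
# Minimal sketch of the two-problem approach, assuming a toy convex
# objective f(x, p) = ||x||^2 + (p + 1)^2 whose unconstrained minimizer
# has p = -1, i.e. violates p > 0.
import numpy as np
from scipy.optimize import minimize

def f(z):
    x, p = z[:-1], z[-1]          # last coordinate plays the role of p
    return np.dot(x, x) + (p + 1.0) ** 2

# Problem 1: minimize with the constraint ignored.
res1 = minimize(f, np.zeros(3))   # two x-components plus p
p_star = res1.x[-1]

if p_star > 0:
    sol = res1.x                  # constraint inactive: already optimal
else:
    # Problem 2: pin p at a small epsilon and minimize over x alone.
    eps = 1e-8
    res2 = minimize(lambda x: f(np.append(x, eps)), np.zeros(2))
    sol = np.append(res2.x, eps)

print(sol)  # here: x near 0 and p pinned at eps, since problem 1 gave p = -1
```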

In practice, we do not actually have to solve the first problem to completion if it is producing $p<0$. We can terminate as soon as we discover (through duality, say) that the optimal solution will have $p\leq 0$. And if you have multiple variables that must be positive, then an active set method might allow us to generalize this approach.

Now, you probably have a very good reason why you can't do something like this. But because you've not shared the full model, we don't know it. As written, this is the best we can offer. If you're willing to use constrained optimization machinery, of course, then there are all sorts of options available to you.

Answer

Yes, of course you can do this. If $\phi$ is a diffeomorphism, then over the appropriate domains

$$\min_x\ f(x) = \min_y\ (f\circ \phi)(y).$$

I don't know a reference offhand, unfortunately, but it is rather intuitive that this should work -- by the chain rule, $\nabla(f\circ\phi)(y) = [J\phi(y)]^\top\,\nabla f(\phi(y))$, which vanishes if and only if $\nabla f(\phi(y)) = 0$, since $J\phi(y)$ is invertible.
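A minimal sketch of this reparametrization with $\phi(c) = \exp(c)$, using an assumed toy objective whose constrained minimizer is interior (the favorable case; for a boundary minimizer $c$ would run off to $-\infty$, and note the accepted answer's point that convexity is generally lost):

```python
# Minimal sketch of the substitution p = exp(c) (a diffeomorphism from
# R onto {p > 0}), assuming a toy objective f(p) = (p - 2)^2 whose
# constrained minimizer p = 2 is interior; a boundary minimizer would
# instead push c toward -infinity.
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda p: (p - 2.0) ** 2
g = lambda c: f(np.exp(c))        # f composed with phi, unconstrained in c

res = minimize_scalar(g)          # unconstrained 1-D minimization over c
p_hat = float(np.exp(res.x))
print(p_hat)  # close to 2.0: the interior minimizer is recovered
```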

Answer

Assuming there are some unstated constraints (otherwise the problem is just unbounded), I would just say

$$\min_{x,p} ||x|| - p - \lambda \log(p)$$

where $\lambda$ acts as a scaling factor. This is an old-fashioned approach called SUMT (Sequential Unconstrained Minimization Technique) that traces back, I believe, to the 1960s; it is a seminal method of Fiacco & McCormick. You solve a sequence of convex problems, reducing $\lambda$ each time, until the solution stops changing. This class of methods belongs to the interior-point family. They are usually numerically unstable and not very efficient.
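A sketch of that SUMT loop on an assumed toy problem (my own choice, picked so the constrained optimum sits on the boundary $p = 0$, the case the barrier is designed for):

```python
# Sketch of the SUMT / log-barrier idea on a toy convex problem,
# assuming f(p) = (p + 0.5)^2 with constraint p > 0; the constrained
# minimum sits on the boundary p = 0, which the barrier iterates
# approach as lambda shrinks.
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda p: (p + 0.5) ** 2

lam, p_prev = 1.0, np.inf
for _ in range(20):
    # Barrier subproblem (a wide box stands in for "unconstrained" here).
    res = minimize_scalar(lambda p: f(p) - lam * np.log(p),
                          bounds=(1e-15, 10.0), method="bounded",
                          options={"xatol": 1e-12})
    if abs(res.x - p_prev) < 1e-9:
        break                      # solution stopped changing
    p_prev, lam = res.x, lam / 10.0

print(p_prev)  # close to the boundary optimum p = 0
```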

As a practical approach, I would instead solve

$$\min_{x,p} ||x|| - p$$ $$p \geq \epsilon$$

for decreasing values of $\epsilon$ until either the constraint is not binding, or the solution does not change that much.
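The decreasing-$\epsilon$ loop can be sketched as follows, again on an assumed toy convex objective with a boundary optimum (so the constraint stays binding and the stopping test fires on "solution does not change"):

```python
# Sketch of the decreasing-epsilon loop, assuming the toy convex
# f(p) = (p + 0.5)^2 minimized subject to p >= eps (boundary optimum p = 0).
import numpy as np
from scipy.optimize import minimize

f = lambda p: (p + 0.5) ** 2

eps, p_prev = 1.0, np.inf
for _ in range(15):
    # Solve the constrained subproblem: min f(p) subject to p >= eps.
    res = minimize(lambda z: f(z[0]), x0=[1.0], bounds=[(eps, None)])
    p_new = res.x[0]
    if abs(p_new - p_prev) < 1e-9:
        break                      # solution stopped changing
    if p_new > eps + 1e-9:
        break                      # constraint no longer binding
    p_prev, eps = p_new, eps / 10.0

print(p_prev)  # tends toward the boundary optimum p = 0
```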