Prove rigorously the equivalence of two optimization problems


Consider two optimization problems:

P1: \begin{aligned} \max &~x_1\log(1+w_1)+x_2\log(1+w_2)\\ \text{s.t.} &~x_1 w_1 + x_2 w_2 \leq a \\ &~x_1,x_2\in\{0,1\}\\ &~w_1,w_2\geq 0 \end{aligned}

P2: \begin{aligned} \max &~x_1\log(1+y_1/x_1)+x_2\log(1+y_2/x_2)\\ \text{s.t.} &~y_1 + y_2 \leq a \\ &~x_1,x_2\in\{0,1\}\\ &~y_1,y_2\geq 0 \end{aligned}

Notice that $x_1,x_2$ are binary variables. Basically, I used the substitution $y_i = x_i w_i$ to replace $w_1$ and $w_2$. I claimed the equivalence of P1 and P2 in a paper submission without proof, but a reviewer challenged the claim. How can I prove the equivalence rigorously for the reviewer?

I think the key point is that $\lim_{x_1\rightarrow 0^+} x_1\log(1+y_1/x_1) = 0$. Then, adopting the convention that the term equals $0$ when $x_1 = 0$, any feasible point $(x,w)$ of P1 maps to the feasible point $(x,y)$ of P2 with $y_i = x_i w_i$, and both points yield the same objective value. Is the proof complete this way?
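For what it's worth, a short squeeze argument justifies that limit on the feasible set (a sketch, relying on the bound $y_1 \leq a$ implied by the budget constraint):

```latex
\text{For } 0 < x_1 \le 1 \text{ and } 0 \le y_1 \le a,\text{ monotonicity of } \log \text{ gives}
\[
0 \;\le\; x_1\log\!\left(1+\frac{y_1}{x_1}\right)
\;\le\; x_1\log\!\left(1+\frac{a}{x_1}\right)
\;=\; x_1\log(x_1+a) - x_1\log x_1
\;\xrightarrow[x_1\to 0^+]{}\; 0,
\]
\text{since } x\log x \to 0 \text{ as } x \to 0^+.
```

So defining the term to be $0$ at $x_1=0$ extends the objective of P2 continuously.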



Answer 1 (score 1):

Update: Ignore this; I forgot $y$ is bounded.

I suspect that, in claiming $\lim_{x_{1}\rightarrow0}x_{1}\log\left(1+y_{1}/x_{1}\right)=0$, you are tacitly assuming that $y_1$ is fixed. What if $x_1\rightarrow 0$ with $y_1 = \exp(1/x_1)$? What if $y_1=\exp(x_1^{-2})$?
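A quick calculation makes the objection concrete (kept here even though the update above withdraws it, since it shows exactly why boundedness of $y$ matters): with the first choice the term stays bounded away from $0$, and with the second it blows up,

```latex
x_1\log\!\left(1+e^{1/x_1}\right) \;\ge\; x_1\log\!\left(e^{1/x_1}\right) \;=\; 1,
\qquad
x_1\log\!\left(1+e^{1/x_1^2}\right) \;\ge\; x_1\cdot\frac{1}{x_1^2} \;=\; \frac{1}{x_1} \;\to\; \infty .
```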

Answer 2 (score 0):

Sorry, I somehow lost track of $y_i$ being bounded. So your limit is fine, but there's still a catch. You would normally want to establish a bijection between the feasible regions of P1 and P2, but you can't do that here. For instance, $x_1=0$ and $w_1=17$ can occur in feasible solutions in P1, but you can't get $w_1=17$ from any feasible solution of P2. So part of what you need to do is split into four cases, according to the values of $x_1$ and $x_2$. For $x_1 = 1 = x_2$, finding a correspondence between feasible values of $w$ and $y$ is trivial. For the other three cases, you need to show that any objective value attainable in either problem is also attainable in the other problem. To quote a former prof of mine, that's "tedious but brutally straightforward".
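For instance, in the case $x=(0,1)$ the attainable objective values coincide (a sketch; the cases $x=(1,0)$ and $x=(0,0)$ are analogous):

```latex
\text{P1 with } x=(0,1):\quad
\max_{0 \le w_2 \le a} \log(1+w_2) = \log(1+a)
\quad (\text{any } w_1 \ge 0 \text{ is feasible but does not affect the objective}),
\]
\[
\text{P2 with } x=(0,1):\quad
\max_{y_1+y_2 \le a,\; y \ge 0} \log(1+y_2) = \log(1+a)
\quad (\text{attained at } y_1 = 0,\; y_2 = a).
```

In both problems the set of attainable values is $\{\log(1+t) : 0 \le t \le a\}$, so the optima agree even though no bijection between feasible sets exists.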

The other thing you need to consider is that, while the problems are mathematically equivalent, they're not equivalent to a solver. For any combination of $x$ values, P1 reduces to maximizing a smooth concave function with linear constraints, which is fine. If either $x_1$ or $x_2$ is zero, however, your objective function in P2 is technically undefined, as dividing by zero is prohibited under the Geneva Conventions. So you would need to find a solver that can handle special cases in the objective function (e.g., objective is $\log(1+y_2)$ if $x=(0,1)$), or partition into four separate problems, or some other kludge.
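As an illustration of the "partition into four separate problems" approach, here is a minimal sketch in Python (the function name is my own, not from any library; it exploits the fact that, for fixed $x$, the active $\log(1+y_i)$ terms in P2 are identical, so by symmetry and concavity the budget $a$ is split equally among them — a general model with non-identical terms would need an actual convex solver for each subproblem):

```python
import math

def solve_p2(a):
    """Brute-force P2 by enumerating the four binary choices of (x1, x2)
    and solving each continuous subproblem in closed form.

    For fixed x, P2 reduces to maximizing
        sum_{i : x_i = 1} log(1 + y_i)   s.t.  sum_i y_i <= a,  y >= 0.
    With k identical active terms, the optimum puts y_i = a / k on each,
    giving the value k * log(1 + a / k).  (This closed form is specific
    to identical log(1 + .) terms.)
    """
    best_val, best_x = 0.0, (0, 0)
    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        k = x[0] + x[1]  # number of active terms
        val = k * math.log(1 + a / k) if k > 0 else 0.0
        if val > best_val:
            best_val, best_x = val, x
    return best_x, best_val
```

For any $a>0$ this picks $x=(1,1)$, since $2\log(1+a/2) > \log(1+a)$ follows from $(1+a/2)^2 = 1 + a + a^2/4 > 1+a$; the point is only that enumerating $x$ sidesteps the $y_i/x_i$ division entirely.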