Linearization of a non-linear objective function


Consider the optimization problem
\begin{align}
&\min_{x_1,x_2,y_1,y_2,\delta_1,\delta_2} \ \delta_1 \max\{x_1,y_1\} + \delta_2 \max\{x_2,y_2\} \\
&x_1,x_2,y_1,y_2 \in [0,1] \\
&\delta_1,\delta_2 \in \{0,1\} \\
&\begin{bmatrix} e_1\\e_2 \end{bmatrix} \leq A\begin{bmatrix} x_1\\x_2 \end{bmatrix} + B\begin{bmatrix} y_1\\y_2 \end{bmatrix} \leq \begin{bmatrix} c_1\\c_2 \end{bmatrix} \\
&1 \leq a\delta_1 + b\delta_2 \leq d
\end{align}
Is it possible to linearize the objective function so that we can use a MILP solver?

Best answer

Completely rewritten answer since the OP has changed the problem.

Let $z_{1}$ and $z_{2}$ be continuous variables. Then

$\min z_{1}+z_{2}$

subject to

$z_{1} \geq x_{1} - (1-\delta_{1})$

$z_{1} \geq y_{1} - (1-\delta_{1})$

$z_{2} \geq x_{2} - (1-\delta_{2})$

$z_{2} \geq y_{2} - (1-\delta_{2})$

$z_{1}, z_{2} \geq 0$
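These constraints can be checked numerically before reading on. The quick pure-Python sketch below (the helper name `tight_z` is introduced here for illustration) confirms that the smallest $z$ satisfying all three inequalities equals $\delta \max\{x, y\}$ whenever $x, y \in [0,1]$:

```python
# Smallest z satisfying z >= x - (1-d), z >= y - (1-d), z >= 0.
def tight_z(x, y, d):
    return max(x - (1 - d), y - (1 - d), 0.0)

# Check against d * max(x, y) on a grid of x, y in [0, 1] and d in {0, 1}.
for d in (0, 1):
    for i in range(11):
        for j in range(11):
            x, y = i / 10, j / 10
            assert abs(tight_z(x, y, d) - d * max(x, y)) < 1e-12
print("linearization verified")
```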

These constraints ensure that if $\delta_{i}=1$, then $z_{i} \geq \max\{x_{i}, y_{i}\}$. If $\delta_{i}=0$, then because $x_{i}, y_{i} \leq 1$, the right-hand sides are nonpositive and the constraints reduce to $z_{i} \geq 0$. Since the objective minimizes $z_{1}+z_{2}$, each $z_{i}$ is pushed down to its tightest lower bound, namely $\delta_{i}\max\{x_{i},y_{i}\}$, so the linear model is equivalent to the original objective. Note that the coefficient $1$ on $(1-\delta_{i})$ is a valid big-$M$ only because $x_{i}, y_{i} \in [0,1]$.
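Putting it together, the full linearized model can be handed to a MILP solver. The sketch below uses `scipy.optimize.milp`; since the question leaves $A$, $B$, $a$, $b$, $d$, $e_i$, $c_i$ unspecified, the numeric data here are assumptions for illustration only ($A=B=I$, $e_i=0.5$, $c_i=1.5$, $a=b=1$, $d=2$):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variable order: x1, x2, y1, y2, d1, d2, z1, z2
c_obj = np.array([0, 0, 0, 0, 0, 0, 1, 1], dtype=float)

A_con = np.array([
    [-1, 0,  0,  0, -1,  0, 1, 0],  # z1 - x1 - d1 >= -1  (z1 >= x1 - (1-d1))
    [ 0, 0, -1,  0, -1,  0, 1, 0],  # z1 - y1 - d1 >= -1
    [ 0, -1, 0,  0,  0, -1, 0, 1],  # z2 - x2 - d2 >= -1
    [ 0, 0,  0, -1,  0, -1, 0, 1],  # z2 - y2 - d2 >= -1
    [ 0, 0,  0,  0,  1,  1, 0, 0],  # 1 <= d1 + d2 <= 2   (assumed a=b=1, d=2)
    [ 1, 0,  1,  0,  0,  0, 0, 0],  # 0.5 <= x1 + y1 <= 1.5 (assumed A=B=I)
    [ 0, 1,  0,  1,  0,  0, 0, 0],  # 0.5 <= x2 + y2 <= 1.5
], dtype=float)
lb = np.array([-1, -1, -1, -1, 1, 0.5, 0.5])
ub = np.array([np.inf, np.inf, np.inf, np.inf, 2, 1.5, 1.5])

res = milp(
    c=c_obj,
    constraints=LinearConstraint(A_con, lb, ub),
    integrality=np.array([0, 0, 0, 0, 1, 1, 0, 0]),  # only d1, d2 are integer
    bounds=Bounds([0] * 8, [1, 1, 1, 1, 1, 1, np.inf, np.inf]),
)
print(res.status, res.fun)
```

For these assumed data the optimum sets one $\delta_i=1$ and $x_i=y_i=0.25$, giving an objective value of $0.25$.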