When solving a MILP via successive linear relaxations, are relaxed indicator variables and constraints still useful to guide the objective function?


I have a multi-objective mixed binary quadratic (non-linear) problem.

Following a scalarization approach, the objective function includes the sum of some binary variables (say Z_j) minus lambda times the sum of the products of some other binary variables (say x_i * y_k).

Furthermore, a set of constraints include the product of these optimization variables (x_i * y_k).

I derived a MILP formulation by linearizing each product of two binary variables, through 0-1 indicator variables I_d (and the necessary constraints). Hence, I replaced the sum of x_i * y_k with the sum of the indicators in the objective function.

To reach an approximate solution in reasonable time, I am working on a constructive iterative algorithm based on successive linear relaxations of the original problem.

While I can easily understand the rationale and fix the values of Z_j according to a rounding procedure, I am not confident that relaxing the indicator variables still allows me to reasonably apply the rounding technique to x_i and y_k.

According to your knowledge and expertise, should I keep the indicator variables in the continuous relaxation of the MILP formulation, or should I try another path to solve my problem? Any suggestion or reference to documentation I should study is highly appreciated.

Thanks in advance.

EDIT: In the following I try to introduce an example. I apologize for the poor formalism.

My system includes a set of applications $j = 1,\dots,A$. Each application is decomposed into a set of $n$ sub-tasks. Among them, one task is designated the master $m_j$, and the other $n-1$ are slaves; $S^j$ denotes the set of slaves of $j$.

I need to assign tasks to computing units (CUs) $k = 1,\dots,C$. A network connects each pair of computing units $k_1$ and $k_2$, and the traffic between them is bounded by $B$.

An application is successfully admitted (i.e. $z_j=1$) if all of its sub-tasks are assigned. For each application, an amount of data $d_j$ has to be transmitted from each slave to the master. If the master and the slaves of $j$ reside on the same computing unit, there is no traffic flow; otherwise, we must account for the traffic sent from the computing units running the slaves to the one running the master.

Goal (1): maximize the number of admitted applications, while minimizing the traffic among units.

List of variables:

$ h_{m_j} $ tells if the master task of $j$ is assigned to any CU

$ h_{js} $ tells if the slave task $s$ of $j$ is assigned to any CU

$ y_{km} $ tells if the master task $m$ is assigned to CU $k$

$ x_{ks} $ tells if the slave task $s$ is assigned to CU $k$

$ F_{k_1k_2j} $ is the network flow between two CU due to tasks of $j $

$ F_{k_1k_2} $ is the network flow between two CU

$(1) \max (\sum_{j=1}^{A} z_j ) - \lambda F$

$(2) h_{m_j} = 1 - \prod_{k=1}^{C} (1-y_{km_j}) \qquad \forall j$

$(3) h_{js} = 1 - \prod_{k=1}^{C} (1-x_{ks}) \qquad \forall j, s$

$(4) z_j = (h_{m_j} + \sum_{s \in S^j} h_{js}) / n \qquad \forall j $

$(5) \sum_{k=1}^{C} x_{ks} \le 1 \qquad \forall s$

$(6) \sum_{k=1}^{C} y_{km} \le 1 \qquad \forall m$

$(7) F_{k_1k_2j} = y_{k_2m_j} \times (\sum_{s \in S^j} x_{k_1s} \times d_j ) \qquad \forall j, k_1 ,k_2 \neq k_1$

$(8) F_{k_1k_2} = \sum_{j=1}^{A} F_{k_1k_2j} \qquad \forall k_1 ,k_2 \neq k_1$

$(9) F_{k_1k_2} \le B \qquad \forall k_1 ,k_2 \neq k_1 $

$(10) F= \sum_{k_1=1}^{C} \sum_{k_2=1,k_2 \neq k_1}^{C} F_{k_1k_2}$

with $z, h, x, y$ binary, and $F$ continuous.
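As a quick sanity check on the intent of (2) and (3) (this is just a standalone script I used to convince myself, not part of the model), the expression $1 - \prod_k (1 - b_k)$ over binary values is exactly the logical OR of the $b_k$, so $h$ correctly tells whether the task was assigned to some CU:

```python
from itertools import product

# Sanity check for (2)-(3): over binary values, 1 - prod(1 - b_k)
# equals 1 iff at least one b_k is 1 (logical OR).
def h_value(bits):
    p = 1
    for b in bits:
        p *= (1 - b)
    return 1 - p

for bits in product([0, 1], repeat=3):  # e.g. C = 3 CUs
    assert h_value(bits) == (1 if any(bits) else 0)
print("OK: (2)-(3) encode the logical OR of the assignment variables")
```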

Regarding the linearization of (7), which includes the product of two binary variables, I introduced binary variables $\theta_{k_1sk_2m_j} = x_{k_1s} \, y_{k_2m_j}$ with the standard constraints:

$\theta_{k_1sk_2m_j} \le x_{k_1s}$

$\theta_{k_1sk_2m_j} \le y_{k_2m_j}$

$\theta_{k_1sk_2m_j} \ge x_{k_1s} + y_{k_2m_j} - 1$

$\theta_{k_1sk_2m_j} \ge 0$

Therefore (7) is rewritten to

$(7) F_{k_1k_2j} = (\sum_{s \in S^j} \theta_{k_1sk_2m_j} \times d_j ) \qquad \forall j, k_1 ,k_2 \neq k_1$
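To see exactly where my concern comes from, I brute-forced the standard product linearization ($\theta \le x$, $\theta \le y$, $\theta \ge x + y - 1$, $\theta \ge 0$) in a small standalone script. At binary points the feasible $\theta$ collapses to the single value $xy$, but at fractional points the constraints only pin $\theta$ to the interval $[\max(0, x+y-1), \min(x, y)]$, which is precisely why the relaxed indicators may stop tracking the product in the middle of the algorithm:

```python
from itertools import product

# Feasible range of theta under the standard linearization of theta = x*y:
#   theta <= x, theta <= y, theta >= x + y - 1, theta >= 0
def theta_range(x, y):
    lo = max(0.0, x + y - 1.0)
    hi = min(x, y)
    return lo, hi

# At binary points the interval collapses to the single value x*y ...
for x, y in product([0, 1], repeat=2):
    lo, hi = theta_range(x, y)
    assert lo == hi == x * y

# ... but at fractional points theta is only loosely constrained:
lo, hi = theta_range(0.5, 0.5)
print(lo, hi)  # theta may sit anywhere in [0.0, 0.5]
```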

I omit the linearization of (2) and (3), the budget constraints on CUs, and other constraints that rule out the trivial solution of putting every sub-task on the same host until it is full. To clarify, some slave tasks may be constrained to run on a subset of certain CUs; in that case the master shall be placed on the CU with the maximum number of assigned slaves.

The idea of the iterative algorithm is: solve the LP relaxation, pick the highest $z_j$ and fix it to one, and verify that the LP relaxation is still feasible. If so, iteratively assign the slave tasks of $j$ to CUs, and then the master task of $j$, depending on the continuous values of $x$ and $y$.
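For concreteness, here is a minimal sketch of the rounding step I have in mind for one admitted application (the helper name, the dictionary layout of the fractional values, and the tie-breaking rule are all my own assumptions, not an established procedure): each slave goes to the CU with the largest fractional $x_{ks}$, and the master then goes to the CU hosting the most of its slaves, breaking ties by the fractional $y_{km}$:

```python
# Hypothetical rounding step for one application j, assuming the fractional
# LP values are given as dictionaries:
#   x_frac[(k, s)] = relaxed value of x_ks
#   y_frac[k]      = relaxed value of y_km_j
def round_application(slaves, cus, x_frac, y_frac):
    # Assign each slave to the CU where its fractional value is largest.
    slave_cu = {s: max(cus, key=lambda k: x_frac[(k, s)]) for s in slaves}
    # Place the master on the CU hosting most slaves; break ties by y_frac.
    load = {k: sum(1 for s in slaves if slave_cu[s] == k) for k in cus}
    master_cu = max(cus, key=lambda k: (load[k], y_frac[k]))
    return slave_cu, master_cu

# Tiny usage example with made-up fractional values:
slaves, cus = ["s1", "s2"], [1, 2]
x_frac = {(1, "s1"): 0.7, (2, "s1"): 0.3, (1, "s2"): 0.6, (2, "s2"): 0.4}
y_frac = {1: 0.2, 2: 0.8}
print(round_application(slaves, cus, x_frac, y_frac))
# both slaves land on CU 1, so the master follows them to CU 1
```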

I know that we can relax integrality on $\theta$ and the constraints will still hold, but this is true only if $x$ and $y$ are fixed to integer values. If that is not the case (e.g. in the middle of the proposed iterative algorithm), $\theta$ may lie strictly between 0 and 1, and I am not sure it will still guide the heuristic toward a solution with the least value of $F$.