How to reformulate with Dantzig-Wolfe decomposition technique


I am dealing with the following Binary ILP:

\begin{equation*} \label{equation6} \text{minimize} \quad \sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{t=0}^{T-p_{ij}}e_{ij}x_{ijt} \end{equation*}

subject to

\begin{equation} \label{equation3} \sum_{i=1}^{m}\sum_{t=0}^{T-p_{ij}}x_{ijt}=1, \quad (j=1,\dots,n) \end{equation}

\begin{equation} \label{equation2} \sum_{i=1}^{m}\sum_{t=0}^{T-p_{ij}}(t+p_{ij})x_{ijt}\leq D, \quad (j=1,\dots,n) \end{equation}

\begin{equation} \label{equation5} \sum_{j=1}^{n}\sum_{s=\max(0,t-p_{ij})}^{t-1} RR_{j}x_{ijs} \leq RC_{i}, \quad (i=1,\dots,m,\ t=0,\dots,T) \end{equation}

\begin{equation} \begin{aligned} \sum_{i=1}^{m}\sum_{s=\max(0,t-p_{ij})}^{T-p_{ij}} x_{ijs} + & \sum_{i=1}^{m}\sum_{s=0}^{t-1} x_{ij's} \leq 1,\\ & (j=1,\dots,n', \ j'=n'+1,\dots,n, \ t=0,\dots,T) \end{aligned} \end{equation}

\begin{equation*} x_{ijt} \in \{0,1\} \end{equation*}

where

The indices have the following meanings:

$ j $ = Index of tasks to be scheduled $ (j = 1, \dots, n', n'+1, \dots, n) $.

$ i $ = Index of parallel machines $ (i = 1, \dots, m) $.

$ t $ = Index of time in the scheduling horizon $ (t = 0, \dots, T) $.

The parameters (non-negative integers) have the following interpretations:

$ n $ = The total number of tasks. There are two sets of tasks: set one comprises tasks $1$ to $n'$, and set two comprises tasks $n'+1$ to $n$, as already indicated in the meaning of index $ j $.

$ m $ = The total number of parallel machines.

$ RR_{j} $ = The amount of resource (memory) required by task $ j $.

$ RC_{i} $ = The amount of available resource (memory) on machine $ i $.

$ p_{ij} $ = Processing time of task $ j $ on machine $ i $.

$ e_{ij} $ = Energy consumption of executing task $ j $ on machine $ i $.

The decision variable is defined as:

$ x_{ijt} $ = 1 if task $ j $ is assigned to machine $ i $ starting at time $ t $, and 0 otherwise.

A note on the problem description:

$n$ tasks are to be assigned to $m$ parallel machines over the time horizon $0, \dots, T$. The objective is to minimize the total energy consumption. The formulation has four constraints. From top to bottom: Constraint 1 (assignment constraint) requires each task to be assigned to exactly one machine, exactly once. Constraint 2 (deadline constraint) requires that every task finish its computation before the user-specified deadline $D$. Constraint 3 (capacity constraint) requires that, at any time period, the tasks assigned to a machine do not consume more resources than that machine's capacity. The last constraint (temporal dependency) requires every task in set two to start only after every task in set one has finished.
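To make the model concrete, here is a minimal brute-force sketch that enumerates all assignments for a tiny made-up instance (2 machines, 2 tasks) and checks the four constraints above. All data values are hypothetical, purely for illustration; this is not the CPLEX model from the question.

```python
from itertools import product

# Tiny hypothetical instance (all numbers made up, not from the question):
# 2 machines, 2 tasks with n' = 1, so task 1 is in set one and task 2 in set two.
m, n, n_prime, T, D = 2, 2, 1, 4, 4
p = {(1, 1): 2, (1, 2): 1, (2, 1): 1, (2, 2): 2}   # p[i, j]: processing time
e = {(1, 1): 3, (1, 2): 2, (2, 1): 4, (2, 2): 1}   # e[i, j]: energy
RR = {1: 1, 2: 1}                                   # memory demand of task j
RC = {1: 1, 2: 1}                                   # memory capacity of machine i

# A candidate solution assigns each task j a pair (i, t) with 0 <= t <= T - p[i, j];
# this encodes constraint 1 (each task scheduled exactly once) by construction.
slots = {j: [(i, t) for i in range(1, m + 1)
             for t in range(T - p[i, j] + 1)] for j in range(1, n + 1)}

def feasible(assign):
    # Constraint 2: every task completes by the deadline D.
    if any(t + p[i, j] > D for j, (i, t) in assign.items()):
        return False
    # Constraint 3: machine capacity at every time period.
    for i in range(1, m + 1):
        for t in range(T + 1):
            load = sum(RR[j] for j, (ij, s) in assign.items()
                       if ij == i and t - p[i, j] <= s <= t - 1)
            if load > RC[i]:
                return False
    # Constraint 4: each set-two task starts after every set-one task finishes.
    for j in range(1, n_prime + 1):
        for jp in range(n_prime + 1, n + 1):
            i, t = assign[j]
            _, tp = assign[jp]
            if tp < t + p[i, j]:
                return False
    return True

solutions = [dict(zip(slots, combo)) for combo in product(*slots.values())]
best = min((a for a in solutions if feasible(a)),
           key=lambda a: sum(e[i, j] for j, (i, t) in a.items()))
best_energy = sum(e[i, j] for j, (i, t) in best.items())
print(best, best_energy)
```

On this toy instance the optimum puts task 1 on machine 1 at $t=0$ and task 2 on machine 2 at $t=2$. Of course, this exhaustive search scales even worse than the ILP; it only serves to pin down the semantics of the constraints.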

Here is my question

I have implemented this ILP in IBM ILOG CPLEX Optimization Studio. The model works fine and produces accurate schedules. However, it takes too much time for large numbers of tasks and machines, as the numbers of variables and constraints grow accordingly. In order to improve the time efficiency:

Q1. Can I apply Dantzig-Wolfe decomposition to my formulation? Can anyone help identify the master problem and the subproblem(s)?

Q2. Is there any possibility to apply Benders decomposition?


Here is one option. You can define schedules for one machine in your subproblem and merge the schedules in the master problem, along the following lines:

Let $\lambda_{sm}\in \{0,1 \}$ be a binary variable that takes value $1$ if and only if schedule $s \in \Omega_m$ is used for machine $m\in M$. So $\Omega_m$ is the set of feasible schedules for machine $m$. A schedule $s$ on machine $m$ requires $e_{sm}=\sum_{j \mid j \in s} e_{mj}$ units of energy.

You want to minimize the overall energy consumption: $$ \sum_{m\in M}\sum_{s\in \Omega_m} e_{sm}\lambda_{sm} $$ subject to

  • Each machine has a unique schedule: $$ \sum_{s\in \Omega_m} \lambda_{sm} = 1 \quad \forall m \in M $$
  • Each task $j$ is assigned to a unique machine (your constraint $1$): $$ \sum_{m\in M}\sum_{s\in \Omega_m \mid j \in s} \lambda_{sm} = 1 \quad \forall j \in \{1,\dots,n\} $$
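For illustration, here is a sketch of the restricted master LP over a small hand-built column pool, using SciPy's HiGHS-based `linprog` to solve it and recover the dual values that pricing needs. Everything here is hypothetical: the machines, tasks, energies, and the column pool itself are made up, and in a real implementation the pool would grow as subproblems generate columns.

```python
from scipy.optimize import linprog

# Hypothetical column pool: (machine, set of tasks in the schedule, energy e_sm).
columns = [(1, set(),   0), (1, {1},  3), (1, {1, 2}, 5),
           (2, set(),   0), (2, {2},  1)]
machines, tasks = [1, 2], [1, 2]

c = [energy for (_, _, energy) in columns]     # objective: total energy
A_eq, b_eq = [], []
# Convexity rows: each machine uses exactly one schedule.
for m in machines:
    A_eq.append([1 if cm == m else 0 for (cm, _, _) in columns])
    b_eq.append(1)
# Assignment rows: each task is covered by exactly one chosen schedule.
for j in tasks:
    A_eq.append([1 if j in s else 0 for (_, s, _) in columns])
    b_eq.append(1)

# Relaxed master: lambda >= 0 instead of binary.
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
duals = res.eqlin.marginals    # first len(machines) entries: mu_m; rest: pi_j
print(res.fun, duals)
```

The `res.eqlin.marginals` array holds the duals of the equality rows in the order they were added, which is exactly what the pricing subproblems consume.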

So this is your master problem. You will relax the integrality of the variables $\lambda_{sm}$. Your other constraints define your subproblem (and thus your feasible schedules). Actually, you will have one subproblem per machine. More specifically, a schedule will be feasible for machine $m$ if the deadline constraints are met for each task of the schedule, if the capacity constraints hold for the machine, and if the temporal dependency constraints are met for each task. Such schedules define your $\Omega_{m}$ sets.
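As a sketch of how an $\Omega_m$ set could be generated on a toy instance, the following enumerates every subset of tasks and every start-time vector for a single machine, keeping only schedules that satisfy the deadline and capacity constraints. All data is hypothetical, and the cross-machine temporal dependency is deliberately left out of this single-machine check.

```python
from itertools import combinations, product

# Toy single-machine data (all numbers hypothetical): one machine with unit
# memory capacity, three tasks, horizon T = 5, deadline D = 5.
T, D, RC = 5, 5, 1
p = {1: 2, 2: 1, 3: 2}     # processing time of task j on this machine
RR = {1: 1, 2: 1, 3: 1}    # memory demand of task j

def capacity_ok(sched):
    # sched maps task -> start time; check the machine's memory load at each t,
    # mirroring constraint 3 (a task started at s occupies times s+1 .. s+p[j]).
    for t in range(T + 1):
        load = sum(RR[j] for j, s in sched.items() if t - p[j] <= s <= t - 1)
        if load > RC:
            return False
    return True

omega = []   # the set Omega_m of feasible schedules for this machine
tasks = list(p)
for r in range(len(tasks) + 1):
    for subset in combinations(tasks, r):
        for starts in product(*(range(T - p[j] + 1) for j in subset)):
            sched = dict(zip(subset, starts))
            if all(s + p[j] <= D for j, s in sched.items()) and capacity_ok(sched):
                omega.append(sched)
print(len(omega))
```

In practice you would never enumerate $\Omega_m$ explicitly (it is exponentially large); the point of column generation is that the pricing subproblem searches it implicitly. The empty schedule is included so that a machine may remain idle.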

You will need to adapt the objective function of your subproblem: it should equal the reduced cost of a schedule $s$ on a given machine $m$, expressed in terms of the dual variables of the master problem above. Columns with negative reduced cost are added to the restricted master, and the process repeats until no such column exists.
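A minimal sketch of that reduced-cost computation, assuming duals $\pi_j$ for the task-assignment constraints and $\mu_m$ for the convexity constraints (all numbers below are hypothetical, standing in for values read off a solved restricted master LP):

```python
# Hypothetical dual values from a solved restricted master LP:
# pi[j] for the task-assignment constraints, mu[m] for the convexity
# (one-schedule-per-machine) constraints. All numbers are made up.
e = {(1, 1): 3, (1, 2): 2, (1, 3): 5}   # e[m, j]: energy of task j on machine m
pi = {1: 4.0, 2: 1.0, 3: 6.0}
mu = {1: 0.5}

def reduced_cost(schedule_tasks, m):
    # c_bar(s, m) = sum_{j in s} e[m, j] - sum_{j in s} pi[j] - mu[m];
    # a column (schedule) prices out favourably when this is negative.
    return (sum(e[m, j] for j in schedule_tasks)
            - sum(pi[j] for j in schedule_tasks)
            - mu[m])

# A schedule containing tasks {1, 3} on machine 1 prices out negatively here,
# so it would be added as a new column to the restricted master.
print(reduced_cost({1, 3}, 1))
```

The pricing subproblem for machine $m$ then minimizes this quantity over all feasible schedules in $\Omega_m$, rather than minimizing raw energy.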