How to handle $x_{ij}x_{ji}=0$


Given a complete bidirectional graph $G=(V,\overrightarrow{E})$, consider the following formulation:

$$\begin{align} \min \quad & \sum_{k\in V} \Big(\alpha_{k}-\sum_{(j,k) \in \overrightarrow{E}} x_{jk}\alpha_{j}\Big)^2 + \sum_{(j,k) \in \overrightarrow{E}} c_{jk}\, g_{jk} \\ \text{s.t.} \quad & x_{ij} \leq M g_{ij} \quad (i,j) \in \overrightarrow{E} \\ & \psi_{ij}+ \psi_{ji}=1 \quad (i,j) \in \overrightarrow{E} \\ & g_{ij} \leq \psi_{ij} \quad (i,j) \in \overrightarrow{E} \\ & A \psi \leq t \\ & g_{ij}, \psi_{ij} \in \{0,1\} \quad (i,j) \in \overrightarrow{E}\\ & x_{ij} \geq 0 \quad (i,j) \in \overrightarrow{E}\end{align}$$

This is a mixed-integer quadratic program (MIQP) that can be handled by off-the-shelf solvers (e.g., Gurobi). That said, the formulation has a weak continuous relaxation because of the big-$M$ constraint.

Alternative formulation: Since $\psi_{ij} + \psi_{ji}=1$ and $g_{ij} \leq \psi_{ij}$, at most one of $g_{ij}, g_{ji}$ can equal 1; combined with $x_{ij} \leq M g_{ij}$, this yields a disjunction constraint of the form $x_{ij} x_{ji}=0$.

What would be an effective way to deal with this disjunction constraint? In the same spirit, can the decision variables $x_{ij}$ be tightened with other inequalities?

My first attempt was to add the inequality $x_{ij}x_{ji} \leq g_{ij}^2$ to tighten the range of $x_{ij}$, but it makes the problem nonlinear (it can no longer be solved by Gurobi).

Edit: There is no natural way to identify a tight big-$M$ for this formulation. What I do is remove all constraints and optimize the convex quadratic objective function (without $g$). Let $x_{ij}^{\star}$ be the solution to this relaxed problem; I then set $M = 5\times \max_{i,j} x^{\star}_{ij}$. Computing the exact (tightest) $M$ is itself very difficult, and this heuristic also gives a far too large $M$.
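Since each node $k$'s squared term involves only the column $x_{\cdot k}$, the no-$g$ relaxation decouples node by node, and one zero-residual minimizer (among many, since the unconstrained minimizer is not unique) can be written down directly. A minimal pure-Python sketch of the heuristic described above (`heuristic_big_m` and its tie-breaking rule are my own illustration, not from the post):

```python
def heuristic_big_m(alpha, safety=5.0):
    """Heuristic big-M from the no-g relaxation.

    For each node k, the relaxation minimizes
    (alpha_k - sum_{j != k} x_jk * alpha_j)^2 over x_jk >= 0.
    A zero-residual solution puts all weight on the largest positive
    alpha_j (one choice among many), which also keeps each x_jk small.
    Returns (M, x_star) with M = safety * max_{i,j} x_star[i][j].
    """
    n = len(alpha)
    x_star = [[0.0] * n for _ in range(n)]
    for k in range(n):
        others = [(alpha[j], j) for j in range(n) if j != k and alpha[j] > 0]
        if not others or alpha[k] <= 0:
            continue  # residual cannot reach zero with x >= 0; leave column at 0
        a_max, j_star = max(others)
        x_star[j_star][k] = alpha[k] / a_max
    m_flat = max(x for row in x_star for x in row)
    return safety * m_flat, x_star
```

For example, `heuristic_big_m([1.0, 2.0, 4.0])` returns `M = 10.0`, driven by node 2 needing $x_{12} = 4/2 = 2$.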


Best answer:

It's hard to give a definite answer without more insight into the problem, but a few thoughts come to mind.

  1. If the $A$ matrix is nonnegative, then $g \le \psi$ implies $Ag\le A\psi \le t$, in which case I think you can eliminate $\psi$ from the formulation (and replace $\psi_{ij} + \psi_{ji} = 1$ with $g_{ij} + g_{ji} \le 1$).
  2. Larry asked about how you compute $M$. You may already be doing this, but I would suggest making $M$ edge-specific: $M_{ij}$. There's no need to use the same value of $M$ in every constraint. If you can provide a priori upper bounds $M_{ij}$ for each $x_{ij}$ (the tighter the better), your current formulation might perform better.
  3. Assuming neither of the above helps enough, you might try a version of Benders decomposition (with no guarantee that it will help). You could replace the first term of the objective with a variable $z\ge 0$ and remove the $x$ variables and the big-$M$ constraints from the master problem. In a subproblem, invoked when the solver thinks it has a solution to the master, you minimize the quadratic expression subject to $x_{ij}\le 0$ for all $(i,j)\in S=\{(m,n) : g^*_{mn} = 0\}$, where $g^*$ comes from the hypothetical master solution. Let's say the minimum of the subproblem objective comes out to $\hat{z}$. The constraint $z\ge \hat{z}\left(1-\sum_{(i,j)\in S} g_{ij}\right)$ is then valid in the master problem. How weak it is would be an empirical question. It might also be possible to shrink $S$ a bit in the new constraint, by eliminating terms for which the KKT multiplier of the corresponding constraint $x_{ij}\le 0$ in the subproblem had the wrong sign (meaning that letting $x_{ij}$ take a positive value would not help $\hat{z}$) ... if in fact that ever happened.
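The validity of that cut can be checked case by case; a sketch in the post's notation (using the fact that the quadratic term is a sum of squares, so $z,\hat{z}\ge 0$):

```latex
Why $z \ge \hat{z}\bigl(1-\sum_{(i,j)\in S} g_{ij}\bigr)$ is valid:
\begin{itemize}
  \item If $g_{ij}=0$ for all $(i,j)\in S$, any completion of the master
        solution must have $x_{ij}\le M g_{ij}=0$ on $S$, i.e.\ exactly
        the subproblem's constraints, so the true quadratic cost is at
        least $\hat{z}$ and the cut reduces to $z \ge \hat{z}$.
  \item If $g_{ij}=1$ for some $(i,j)\in S$, then
        $1-\sum_{(i,j)\in S} g_{ij} \le 0$, so the right-hand side is
        at most $0 \le z$ and the cut is slack (it cuts off nothing).
\end{itemize}
```

So the cut is exact at the $g^*$ that generated it and merely inactive elsewhere, which is why its practical strength is an empirical question.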