Computational complexity of 0-1 program


I would like to know the computational complexity of a 0-1 (binary) linear program (BLP) with $ N $ variables. I am using an optimization solver (GUROBI) to get the 0-1 solution vector of a BLP. However, I would like to know the worst-case computational complexity of the methods used by this kind of solver. Do you have any useful reference, or do you know the complexity? Any information will be welcome.

Best answer:

Binary linear programs (and integer linear programs in general) are NP-hard; the corresponding decision problem is NP-complete. The easiest way to see this is to note that you can encode the minimum vertex cover problem (itself NP-hard) for a graph $G=(V,E)$ as a binary linear program with $|V|$ variables $y_v$ for $v\in V$ and $|E|$ constraints:

$\min\sum_{v\in V}y_v$ s.t.

$y_u+y_v\geq1$ for every edge $(u,v)\in E$

$y_v\in\{0,1\}$ for every $v\in V$

This is explained on Wikipedia: a solution to the above BLP tells you which vertices to pick for a cover, and conversely a cover of $G$ tells you which variables $y_v$ to set to $1$. Therefore, any solver that exactly solves general binary linear programs cannot be guaranteed to run in polynomial time unless P = NP.
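To make the reduction concrete, here is a minimal sketch (function name and graph are illustrative, not from the answer) that solves the vertex-cover BLP by brute force, checking every 0-1 assignment against the constraints $y_u+y_v\geq 1$:

```python
from itertools import product

def min_vertex_cover_blp(vertices, edges):
    """Solve the vertex-cover BLP by brute force:
    minimize sum(y_v) subject to y_u + y_v >= 1 for each edge (u, v)."""
    best = None
    for assignment in product((0, 1), repeat=len(vertices)):
        y = dict(zip(vertices, assignment))
        # Feasibility: every edge must have at least one endpoint picked.
        if all(y[u] + y[v] >= 1 for u, v in edges):
            cost = sum(assignment)
            if best is None or cost < best[0]:
                best = (cost, [v for v in vertices if y[v]])
    return best

# A 4-cycle a-b-c-d-a: any minimum cover has size 2.
cost, cover = min_vertex_cover_blp(
    ["a", "b", "c", "d"],
    [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")],
)
print(cost, cover)  # cost is 2
```

Note that this enumerates all $2^{|V|}$ assignments, which is exactly the exponential blow-up discussed below.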

According to GUROBI's site, they solve integer linear programs with the branch-and-bound method (intuitively, a modification of a depth-first search through the tree of all partial solutions, abandoning a subsearch whenever the partial solution is already worse than the best one known so far), which they complement with many heuristics and reductions to speed up the practical running time. All this being said, the worst-case complexity of this approach is still ultimately exponential, since the exploration space can easily be exponential in the size of the BLP instance.

For a reference, this article describes the worst-case runtime complexity as $O(Mb^d)$ where $M$ is the cost of expanding subproblems, $b$ is the branching factor, and $d$ is the search depth (while this looks like a polynomial-time algorithm, bear in mind that $d$ will depend on the size of the BLP instance, making it actually worst-case exponential time), and they also discuss heuristics and methods for speeding up the algorithm for practical purposes.


To be a bit more explicit, the branch-and-bound method GUROBI uses to solve integer linear programs specialises to BLPs as follows:

  • drop the integrality constraints (i.e., replace $x_i\in\{0,1\}$ with $0\leq x_i\leq1$) and solve the LP
  • if every variable in the LP solution is integral (here, $0$ or $1$), then we are done!
  • otherwise, suppose $x_i^*$ is a non-integral variable in the solution; then construct two new LPs: one where we assert $x_i=0$ and the other where we assert $x_i=1$ (this is the "branch" part). Note that for a general integer program we still get only two branches, namely adding the constraint $x_i\leq\lfloor x_i^*\rfloor$ or $x_i\geq\lceil x_i^*\rceil$
  • solve each of these LPs, and discard a branch whenever its LP optimum is already worse than the best integral solution found so far (this is the "bound" part)

From this description, we can see that $b=2$ (as we branch on a single variable at a time), and if we continue down to depth $N$, the search will have exhausted all of its variables, so $d=N$ in the worst case. Since an LP in $N$ variables without integrality constraints can be solved in $O(N^{2.5})$ time, a worst-case running time for GUROBI---not accounting for their heuristics and whatnot---would be $O(N^{2.5}2^N)$.

This was a rough runtime analysis, but since BLPs are NP-hard to solve exactly, a more careful analysis is unlikely to yield a much better bound.