Minima/Maxima of an ILP over any compact set


I am reading the paper "Lagrangian Relaxation for Integer Programming" by A. M. Geoffrion. In the paper, to prove one of the lemmas, the author uses the theorem that

"minimum value of a linear function over any compact set is not changed if the set is replaced by its convex hull"

I found this bit puzzling. As we know, there is in general a duality gap between an ILP and the corresponding relaxed LP (except in some cases). In the original ILP, if the integer variables are constrained to be either 0 or 1, then shouldn't the LP relaxation (relaxing the variables to lie in $[0,1]$, which keeps the feasible set compact) give us the optimal value of the ILP?

If the ILP is $$ \begin{aligned} &\text{minimize}~~ && c^\top x \\ &\text{subject to}~~ && Ax \leq b,~ x_i \in \{0,1\} \end{aligned} $$ then, based on the above theorem, the relaxed version given below should give us an optimal integral solution, irrespective of whether $A$ is totally unimodular:

$$ \begin{aligned} &\text{minimize}~~ && c^\top x \\ &\text{subject to}~~ && x \in \operatorname{ch}\!\left(\{x \mid Ax \leq b,~ x_i \in \{0,1\}\}\right) \end{aligned} $$ where $\operatorname{ch}(\cdot)$ denotes the convex hull, and I think here the convex hull can be represented as $$ Ax \leq b, \quad 0\leq x_i \leq 1 ~~\forall i. $$

What am I doing wrong? I understood the part about the minima/maxima over a compact set and its convex hull. My question is more about the solution of the ILP versus the corresponding relaxed version: based on the theorem, the relaxed version should always give the optimal solution.
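To make the discrepancy concrete, here is a small sanity check in Python. It is a hedged sketch on a made-up instance (the matrix $A = [2~~2]$, $b = 3$, $c = (-1,-1)$ are my own choice, not from the paper): enumerating the $0/1$ points gives the ILP optimum, while evaluating the objective at the hand-computed vertices of the box-relaxed polytope gives the LP optimum, and the two differ.

```python
from itertools import product

# Tiny 0/1 program (illustrative instance, not from the paper):
#   minimize  -x1 - x2
#   subject to 2*x1 + 2*x2 <= 3,  x in {0,1}^2
c = (-1.0, -1.0)

def objective(x):
    return c[0] * x[0] + c[1] * x[1]

def feasible(x):
    return 2 * x[0] + 2 * x[1] <= 3

# ILP optimum: enumerate the four 0/1 points.
int_opt = min(objective(x) for x in product((0, 1), repeat=2) if feasible(x))

# LP relaxation optimum: a linear objective over a polytope is minimized
# at a vertex, so it suffices to check the (hand-computed) vertices of
# {x in [0,1]^2 : 2*x1 + 2*x2 <= 3}.
vertices = [(0, 0), (1, 0), (0, 1), (1, 0.5), (0.5, 1)]
lp_opt = min(objective(v) for v in vertices)

print(int_opt)  # -1.0
print(lp_opt)   # -1.5: strictly smaller, so relaxation != convex hull here
```

Since the relaxation attains a strictly smaller value than any integer point, its feasible region must be strictly larger than the convex hull of the integer feasible points.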


2 Answers

Best Answer

In general, it is not true that the relaxed version gives the convex hull: $$ch(\{x \in \{0,1\}^n \mid Ax \leq b\}) \neq \{x \in [0,1]^n \mid Ax \leq b\}.$$

But if we did know the convex hull, then the integer program would reduce to a linear program.

A simple example is given by the set $$X = \{ (x_1,x_2) \in \{0,1\}^2 \mid x_1 + x_2 \leq \tfrac{1}{2}\} = \{(0,0)\}.$$ So $ch(X)$ is a single point, but the relaxation of $X$ (replacing $\{0,1\}^2$ by $[0,1]^2$) is a triangle.
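This example is small enough to check by brute force. The sketch below (assuming the objective $c = (-1,-1)$, which I chose to expose the gap) enumerates the $0/1$ points to recover $X$, then compares the minimum over $X$ with the minimum over the vertices of the relaxed triangle:

```python
from itertools import product

# Feasible 0/1 points of X = {x in {0,1}^2 : x1 + x2 <= 1/2}.
X = [x for x in product((0, 1), repeat=2) if x[0] + x[1] <= 0.5]
print(X)  # [(0, 0)] -> ch(X) is the single point (0, 0)

# The relaxation {x in [0,1]^2 : x1 + x2 <= 1/2} is the triangle with
# these vertices (listed by hand for this tiny example):
triangle = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)]

# Minimizing c^T x with c = (-1, -1) separates the two sets:
c = (-1.0, -1.0)
min_over_X = min(c[0] * x[0] + c[1] * x[1] for x in X)             # 0.0
min_over_relax = min(c[0] * v[0] + c[1] * v[1] for v in triangle)  # -0.5
print(min_over_X, min_over_relax)
```

The relaxation reaches $-0.5$ at $(\tfrac12, 0)$, while every point of $X$ (and hence of $ch(X)$) gives $0$, confirming that relaxation and convex hull disagree.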

Linear functions have a constant gradient. Assuming this gradient isn't $0$, the geometric intuition behind Geoffrion's statement is that to reach the highest (or lowest) point of a linear function over a compact convex set, you just need to keep walking uphill (or downhill). If you're not on the boundary of the set, there is always a direction to walk that goes uphill, so the extremum is attained on the boundary, in fact at an extreme point of the set.

Hope this helps.

Answer

I am not very familiar with linear programming, so I'm afraid I can't quickly pin down what you're doing wrong. What I can do is give an explanation in linear-algebra terms; hopefully that helps you understand better what is going on.

First recall that the convex hull is the set of finite convex combinations, i.e., sums with nonnegative coefficients summing to $1$: if $A\subset X$ is a subset of a vector space $X$, then the convex hull of $A$ is $$\text{co}(A)=\left\{\sum^{n}_{i=1}\lambda_{i}x_{i} : n\in\mathbb{N},~ x_{i}\in A \text{ and } \lambda_{i}\geq0 \text{ for all } 1\leq i\leq n,~ \sum^{n}_{i=1}\lambda_{i}=1\right\}.$$

Now let $f$ be a linear function on $X$ and suppose it attains a minimum on $A$; this is guaranteed if $A$ is compact and $f$ is continuous. Suppose for contradiction that $\inf_{x\in\text{co}(A)} f(x)<\min_{x\in A}f(x)$. Then there is a $y\in\text{co}(A)$ such that $f(y)<\min_{x\in A}f(x)$. As $y\in\text{co}(A)$, we can find $x_{1},\dots,x_{n}\in A$ and $\lambda_{1},\dots,\lambda_{n}\geq0$ such that $\sum_{i=1}^{n}\lambda_{i}=1$ and $\sum_{i=1}^{n}\lambda_{i}x_{i}=y$. Since $f$ is linear and each $x_{i}\in A$, we find $$f(y)=\sum^{n}_{i=1}\lambda_{i}f(x_{i})\geq\min_{x\in A}f(x)\sum^{n}_{i=1}\lambda_{i}=\min_{x\in A}f(x),$$ which contradicts our assumption. Hence $f$ attains the same minimum over the convex hull. A similar argument shows the same for the infimum, supremum and maximum.
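The inequality in this argument can be checked numerically. The sketch below (with a finite set $A$ and linear functional $c$ that I made up for illustration) samples random convex combinations and verifies that a linear function never dips below its minimum over $A$:

```python
import random

random.seed(0)

# A finite (hence compact) set A in R^2 and a linear function f(x) = c . x.
A = [(3, 1), (-2, 4), (0, -5), (1, 1)]
c = (2.0, -1.0)

def f(x):
    return c[0] * x[0] + c[1] * x[1]

min_over_A = min(f(x) for x in A)

# Sample random points y of co(A): convex combinations with nonnegative
# weights summing to 1. By linearity, f(y) = sum_i lambda_i * f(x_i),
# which can never drop below min_{x in A} f(x).
for _ in range(10_000):
    w = [random.random() for _ in A]
    s = sum(w)
    lam = [wi / s for wi in w]
    y = (sum(l * x[0] for l, x in zip(lam, A)),
         sum(l * x[1] for l, x in zip(lam, A)))
    assert f(y) >= min_over_A - 1e-9  # tolerance for floating-point error

print("min over A:", min_over_A, "- no convex combination beat it")
```

Of course this only samples $\text{co}(A)$ rather than exhausting it, but it illustrates why averaging points of $A$ can never produce a value below the minimum over $A$.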

I hope this helps you understand the statement, let me know if anything is still unclear.