Linear programming with many constraints and few unknown variables


Vectors $x, y \in \mathbb{R}^{n\times 1}$ satisfy $$ \forall_{i,j}\; x_i + y_j \leq a_{i,j} $$ where $a\in \mathbb{R}^{n\times n}$ is a given matrix. The question is to find the maximum value of $\sum_i x_i + \sum_j y_j$.

My idea

Let $t$ be a bijection from $\{1,..,n\}$ to $\{1,..,n\}$, i.e. a permutation. Summing the constraints $x_k + y_{t(k)} \leq a_{k\,t(k)}$ over $k$ gives $$ \sum_i x_i + \sum_j y_j \leq \sum_k a_{k\,t(k)} $$ and, since this holds for every permutation $t$, $$ \sum_i x_i + \sum_j y_j \leq \min_t \sum_k a_{k\,t(k)} $$
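As a quick numerical sanity check (not a proof), we can verify the bound on a random instance: take the obviously feasible point $y = 0$, $x_i = \min_j a_{ij}$, and compare against the permutation bound computed by brute force. The random matrix and this particular feasible point are illustrative choices, not part of the argument.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
n = 4
a = rng.normal(size=(n, n))

# A simple feasible point: y_j = 0 and x_i = min_j a_ij,
# so x_i + y_j = x_i <= a_ij for every (i, j).
x = a.min(axis=1)
y = np.zeros(n)
assert np.all(x[:, None] + y[None, :] <= a + 1e-12)

# Brute-force the permutation upper bound: min_t sum_k a[k, t(k)].
bound = min(sum(a[k, t[k]] for k in range(n)) for t in permutations(range(n)))
print(x.sum() + y.sum(), bound)  # feasible value vs. upper bound
```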

What's left is to prove that the bound $\min_t \sum_k a_{k\,t(k)}$ is attained. That is, to prove there exist vectors $x, y$ such that $$ \begin{cases} x_i + y_j \leq a_{i,j},\;\;\forall_{i,j}\\ \sum_i x_i + \sum_j y_j = \min_t \sum_k a_{k\,t(k)} \end{cases} $$
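Again not a proof, but the attainment claim can be checked numerically with SciPy: solve the LP maximizing $\sum_i x_i + \sum_j y_j$ subject to $x_i + y_j \leq a_{ij}$ with `linprog`, and compare its optimum to the min-weight assignment value from `linear_sum_assignment`. The random instance below is illustrative only.

```python
import numpy as np
from scipy.optimize import linprog, linear_sum_assignment

rng = np.random.default_rng(0)
n = 4
a = rng.normal(size=(n, n))

# Min-weight assignment: min over permutations t of sum_k a[k, t(k)].
rows, cols = linear_sum_assignment(a)
matching_value = a[rows, cols].sum()

# LP: maximize sum(x) + sum(y)  <=>  minimize -(sum(x) + sum(y))
# subject to x_i + y_j <= a_ij for all i, j.  Variables z = (x, y).
c = -np.ones(2 * n)
A_ub = np.zeros((n * n, 2 * n))
b_ub = np.zeros(n * n)
for i in range(n):
    for j in range(n):
        A_ub[i * n + j, i] = 1.0        # coefficient of x_i
        A_ub[i * n + j, n + j] = 1.0    # coefficient of y_j
        b_ub[i * n + j] = a[i, j]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * n))
lp_value = -res.fun

print(matching_value, lp_value)  # the two optima coincide
```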

Accepted answer

Your upper bound is good, and corresponds to finding a minimum-weight matching, i.e. minimizing over permutation matrices. Alternatively, you can prove a related upper bound directly:

Related (in fact, identical) upper bound

Let $P=(p_{ij})$ be a "doubly stochastic" $n \times n$ matrix, meaning \begin{align} \sum_{j=1}^n p_{ij} &= 1 \quad \forall i \in \{1, ..., n\} \\ \sum_{i=1}^n p_{ij} &=1 \quad \forall j \in \{1, ..., n\} \\ p_{ij} &\geq 0 \quad \forall i, j \end{align} If $(x_i), (y_j)$ satisfy $x_i + y_j \leq a_{ij}$ for all $i,j \in \{1, ..., n\}$, then multiplying each constraint by $p_{ij}$ and summing (as in your existing upper bound) gives: $$\sum_{i=1}^n x_i + \sum_{j=1}^n y_j \leq \sum_{ij} p_{ij} a_{ij} $$ So the best upper bound of this type is found by the following linear program

\begin{align} \mbox{Minimize:} & \quad \sum_{ij} p_{ij}a_{ij} \\ \mbox{Subject to:} & \sum_{j=1}^n p_{ij} = 1 \quad \forall i \in \{1, ..., n\} \\ &\sum_{i=1}^n p_{ij} =1 \quad \forall j \in \{1, ..., n\} \\ &p_{ij} \geq 0 \quad \forall i, j \end{align} This upper bound is in fact identical to your min-weight matching bound over the matrix $(a_{ij})$: every doubly stochastic matrix $(p_{ij})$ can be expressed as a convex combination of permutation matrices (the Birkhoff-von Neumann theorem), so minimizing a linear function over doubly stochastic matrices reduces to minimizing the same function over permutation matrices (the permutation matrices are the extreme points of the set of doubly stochastic matrices).
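This equivalence can be checked numerically on a small instance: solve the doubly stochastic LP with SciPy's `linprog` and compare against a brute-force minimum over all $n!$ permutations. A sketch assuming SciPy is available; the random instance is illustrative.

```python
import numpy as np
from scipy.optimize import linprog
from itertools import permutations

rng = np.random.default_rng(1)
n = 3
a = rng.normal(size=(n, n))

# LP over doubly stochastic matrices: minimize sum_ij p_ij a_ij
# subject to row sums = 1, column sums = 1, p >= 0.
# Variables are p.ravel(), so p[i, j] sits at index i*n + j.
c = a.ravel()
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0      # row-sum constraint for row i
for j in range(n):
    A_eq[n + j, j::n] = 1.0               # column-sum constraint for column j
b_eq = np.ones(2 * n)
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n * n))

# Brute-force min over permutation matrices (the extreme points).
perm_min = min(sum(a[k, t[k]] for k in range(n)) for t in permutations(range(n)))
print(res.fun, perm_min)  # identical by Birkhoff-von Neumann
```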

Matching lower bound

Try using linear programming duality: take the dual of the above linear program in the variables $p_{ij}$ to obtain a linear program in the dual variables $x_i, y_j$.
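One way to see the dual variables concretely (a sketch relying on SciPy's HiGHS interface, where `res.eqlin.marginals` reports the sensitivities of the optimum to the equality right-hand sides, which serve as dual-feasible multipliers for a minimization): the marginals of the row and column constraints can be read off as $(x_i)$ and $(y_j)$, and they satisfy the original constraints with matching objective value.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n = 4
a = rng.normal(size=(n, n))

# Primal: minimize sum_ij p_ij a_ij over doubly stochastic p.
c = a.ravel()
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # row sum of p equals 1
for j in range(n):
    A_eq[n + j, j::n] = 1.0            # column sum of p equals 1
b_eq = np.ones(2 * n)
res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n * n), method="highs")

# The equality-constraint marginals play the role of the dual
# variables: first n entries for rows (x), last n for columns (y).
xd = res.eqlin.marginals[:n]
yd = res.eqlin.marginals[n:]

# Dual feasibility x_i + y_j <= a_ij, and strong duality:
# sum(x) + sum(y) equals the primal optimum.
print(np.all(xd[:, None] + yd[None, :] <= a + 1e-6))
print(abs(xd.sum() + yd.sum() - res.fun) < 1e-6)
```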

Follow-up answer

Since this does not seem to be homework, it may be useful to add the following "appendix" to my answer (although it may also spoil the do-it-yourself flavor of my previous answer). I include it because some treatments of duality theory assume a particular structure that is not convenient for this problem.


You have the (primal) LP: \begin{align} \mbox{Minimize:} &\quad -\left(\sum_{i=1}^n x_i + \sum_{i=1}^n y_i\right) \\ \mbox{Subject to:} &\quad x_i + y_j \leq a_{ij} \quad \forall i,j \in \{1, ..., n\} \end{align} The optimal objective value is the same as: \begin{align} &\inf_{x,y\in \mathbb{R}^n} \sup_{p\geq 0}\left[-\left(\sum_{i=1}^n x_i + \sum_{j=1}^n y_j\right) + \sum_{ij}p_{ij}\left(x_i+y_j-a_{ij}\right) \right]\\ &\overset{(a)}{=}\inf_{x,y\in \mathbb{R}^n}\sup_{p\geq 0}\left[-\sum_{ij}p_{ij}a_{ij} + \sum_{i=1}^nx_i\left(-1+\sum_{j=1}^np_{ij}\right)+\sum_{j=1}^ny_j\left(-1+\sum_{i=1}^np_{ij}\right) \right]\\ &\overset{(b)}{=}\sup_{p\geq 0}\inf_{x,y\in \mathbb{R}^n} \left[-\sum_{ij}p_{ij}a_{ij} + \sum_{i=1}^nx_i\left(-1+\sum_{j=1}^np_{ij}\right)+\sum_{j=1}^ny_j\left(-1+\sum_{i=1}^np_{ij}\right) \right]\\ \end{align} where (a) follows by rearranging the sums; (b) follows by switching the order of $\inf$ and $\sup$, which can be justified in this case by saddle point theory. The inner infimum in the last expression is $-\infty$ unless the coefficient of every $x_i$ and every $y_j$ vanishes, i.e. unless $\sum_{j=1}^n p_{ij} = 1$ for all $i$ and $\sum_{i=1}^n p_{ij} = 1$ for all $j$; restricting to such $p$ shows the last problem has the same optimal value as the (dual) LP: \begin{align} \mbox{Maximize:} & \quad -\sum_{ij} p_{ij} a_{ij} \\ \mbox{Subject to:} & \quad \sum_{j=1}^n p_{ij} = 1 \quad \forall i \in \{1, ..., n\} \\ & \quad \sum_{i=1}^n p_{ij} = 1 \quad \forall j \in \{1, ..., n\} \\ &\quad p_{ij} \geq 0 \quad \forall i,j \in \{1, ..., n\} \end{align}


Why is the primal problem equivalent to the following "2-player game"? $$\inf_{x,y\in \mathbb{R}^n} \sup_{p\geq 0}\underbrace{\left[-\left(\sum_{i=1}^n x_i + \sum_{j=1}^n y_j\right) + \sum_{ij}p_{ij}\left(x_i+y_j-a_{ij}\right) \right]}_{L(x,y,p)}$$ The reason is that player 1 wants to choose $x,y\in\mathbb{R}^n$ to minimize $\sup_{p\geq 0}L(x,y,p)$. If it chooses in a way that violates the constraint $x_i+y_j\leq a_{ij}$ for at least one $(i,j)$ pair, then player 2 can "win" by choosing $p_{ij}\rightarrow \infty$ for that same $(i,j)$ pair, making $L(x,y,p)\rightarrow\infty$. So player 1 must choose $x,y$ to satisfy $x_i + y_j \leq a_{ij}$ for all $(i,j)$. Player 1 can further optimize its choice of $x,y$ as long as these constraints are satisfied, which corresponds to the primal LP.
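The game-theoretic reading can be made concrete with a tiny script (my own illustration, not part of the answer): fix an $(x, y)$ that violates one constraint and watch player 2 drive $L(x, y, p)$ upward by inflating the corresponding $p_{ij}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
a = rng.normal(size=(n, n))

def L(x, y, p):
    """The Lagrangian L(x, y, p) from the text."""
    return -(x.sum() + y.sum()) + (p * (x[:, None] + y[None, :] - a)).sum()

# Choose (x, y) violating the (0, 0) constraint: x_0 + y_0 = a_00 + 1.
x = np.zeros(n)
y = np.zeros(n)
x[0] = a[0, 0] + 1.0   # violation of size 1 at (i, j) = (0, 0)

# Player 2 responds by inflating p_00: L grows without bound.
for scale in [1.0, 10.0, 100.0]:
    p = np.zeros((n, n))
    p[0, 0] = scale
    print(scale, L(x, y, p))
```

Since the violation has size exactly 1, $L$ increases by 1 for every unit added to $p_{00}$.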

Since $L(x,y,p)$ is linear in $(x,y)$ for fixed $p$, and linear in $p$ for fixed $(x,y)$, saddle point theory ensures we can switch the $\inf$ and $\sup$ ordering as we did above.