A peculiar linear program: random variables and limits


I would like your point of view on the following problem, which I am struggling with. Your opinion would be very helpful in understanding how to approach the exercise.

Consider the following (binary) linear program $$ \max_{a_{ij},\ \forall i=1,\dots,I,\ \forall j=1,\dots,J}\ \sum_{i=1}^{I}\sum_{j=1}^{J}a_{ij}\epsilon_{ij} $$ subject to \begin{align} &(1)\ a_{ij}\in \{0,1\} && \forall i=1,\dots,I,\ \forall j=1,\dots,J \\ &(2)\ \sum_{j=1}^{J}a_{ij}\leq 1 && \forall i=1,\dots,I \\ &(3)\ \sum_{i=1}^{I}a_{ij}\leq 1 && \forall j=1,\dots,J \\ &(4)\ \sum_{i=1}^{I}\sum_{j=1}^{J}a_{ij}=b\leq \min\{I,J\} \end{align}

My ultimate objective is to derive a closed form for the maximum value of the objective function.
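For concreteness, the finite problem can be checked numerically by brute force (my own sketch, not part of the original question, and feasible only for small $I$, $J$): enumerate every way to pick $b$ entries with no two sharing a row or a column, and keep the largest sum.

```python
import itertools
import random

def opt_value(eps, b):
    """Maximum sum of b entries of the matrix eps, no two sharing a row or
    column (constraints (1)-(4) above). Brute force over row subsets and
    column permutations; only practical for small I and J."""
    I, J = len(eps), len(eps[0])
    best = float("-inf")
    for rows in itertools.combinations(range(I), b):
        for cols in itertools.permutations(range(J), b):
            best = max(best, sum(eps[i][j] for i, j in zip(rows, cols)))
    return best

# Example with i.i.d. standard normal entries (F = N(0,1)):
random.seed(0)
I, J, b = 5, 4, 3
eps = [[random.gauss(0, 1) for _ in range(J)] for _ in range(I)]
print(opt_value(eps, b))
```

This treats the problem as a cardinality-$b$ max-weight bipartite matching, which is what constraints (2)-(4) describe.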

The peculiarities of this problem are:

(1) $\{\epsilon_{ij}\}_{i,j}$ are i.i.d. continuous random variables with distribution $F$

(2) I want to study the limiting behaviour of the maximum value of the objective function as $I\rightarrow \infty$ and $J\rightarrow \infty$.


First scenario: the problem does not make sense. In this case, I would like help identifying its main issues, since at the moment I cannot see them.

Second scenario: the problem contains no logical flaws. In this case, my intuition is that in the limit the maximum value of the objective function should become a rescaled expectation by some law of large numbers. Do you have any specific thoughts, or pointers to papers/notes/books in the literature that may be related?

Best answer

This answer expands on my comment. Assume $b, I, J$ are positive integers with $I \geq b$ and $J \geq b$, and assume WLOG (by symmetry) that $I$ grows to infinity. Define $$ c = \sup \{x \in \mathbb{R} : P[\epsilon_{11}>x]>0\} $$ (the essential supremum of the distribution $F$). Define $Val_I^{opt}$ as the optimal (max-weight) value of the matching problem, as a function of $I$.

Case 1: Suppose $c\leq 0$.

In this trivial case we have $Val_I^{opt}=0$ with probability 1: with probability 1, every entry $\epsilon_{ij}$ is less than or equal to zero, and the best thing to do is to choose nothing.

Case 2: Suppose $c=\infty$.

In this case we have $$ Val_I^{opt} \geq \max_{i \in \{1, \dots, I\},\, j\in \{1, \dots, J\}}\epsilon_{ij} \overset{I\to\infty}{\longrightarrow} \infty \quad (wp1)$$ and so of course $\lim_{I\rightarrow\infty}E[Val_I^{opt}]=\infty$.

Case 3: Suppose $0<c<\infty$.

In this case it can be shown that \begin{align} &Val_I^{opt} \rightarrow cb \quad (wp1) \\ &\lim_{I\rightarrow\infty} E[Val_I^{opt}] = cb \end{align}

This is because $Val_I^{opt} \geq Val_I^{greedy}$ where $Val_I^{greedy}$ is the value achieved by the following greedy matching strategy:

  • Step 1: Choose the largest entry in the first column.

  • Step 2: Choose the largest entry in the second column, considering only the $I-1$ rows not yet chosen.

  • Step 3: Choose the largest entry in the third column, considering only the $I-2$ rows not yet chosen.

  • and so on...

  • Step $b$: Choose the largest entry in the $b$th column, considering only the $I-(b-1)$ rows not yet chosen.

and it is easy to compute $E[Val_I^{greedy}]$ and $\lim_{I\rightarrow\infty} P[Val_I^{greedy} > b(c-\delta)]$ for all $\delta>0$. For example, if $\epsilon_{11}\sim \text{Uniform}(0,1)$ (so $c=1$), then $E[Val_I^{greedy}]=\sum_{k=0}^{b-1}\frac{I-k}{I-k+1}\rightarrow b = bc$, since the expected maximum of $m$ i.i.d. Uniform$(0,1)$ variables is $m/(m+1)$.
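The greedy strategy above is easy to simulate. The following sketch (my own, not from the answer) uses Uniform$(0,1)$ entries, for which $c=1$, so $Val_I^{greedy}$ should approach $bc = b$ as $I$ grows.

```python
import random

def greedy_value(eps, b):
    """Greedy matching described above: for columns j = 1, ..., b in order,
    pick the largest entry among the rows not yet used."""
    I = len(eps)
    used = set()
    total = 0.0
    for j in range(b):
        i_best = max((i for i in range(I) if i not in used),
                     key=lambda i: eps[i][j])
        used.add(i_best)
        total += eps[i_best][j]
    return total

# Uniform(0,1) entries: c = 1, so greedy value should approach b*c = 3.
random.seed(1)
b = 3
for I in (10, 100, 1000):
    eps = [[random.random() for _ in range(b)] for _ in range(I)]
    print(I, greedy_value(eps, b))
```

Since $Val_I^{opt}\geq Val_I^{greedy}$ and $Val_I^{opt}\leq bc$ (each chosen entry is at most $c$ w.p. 1), the simulation squeezes the optimal value toward $bc$ as well.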