Find the minimum value of the expression $\max\{x_{i_1},\, x_{i_2}-x_{i_1},\, x_{i_3}-x_{i_2},\, \dots,\, x_{i_n}-x_{i_{n-1}},\, 1-x_{i_n}\}$


Given numbers $0<x_1<x_2<\cdots<x_{n^2}<1$, consider each subgroup of $n$ of these elements, $x_{i_1}<x_{i_2}<\cdots<x_{i_n}$, and the value of the expression $$\max\{x_{i_1},\, x_{i_2}-x_{i_1},\, x_{i_3}-x_{i_2},\, \dots,\, x_{i_n}-x_{i_{n-1}},\, 1-x_{i_n}\}.$$

What is the most efficient algorithm for finding the minimal value of this expression over all subgroups of $n$ variables?

2 Answers

Best answer:

I have an algorithm for this, but I haven't been able to prove that it's optimal (in my experience, proving strong lower bounds for computational problems is typically very hard), so maybe this isn't the answer you're looking for.

Let's rephrase the problem: given numbers $n < m$ and $x_0 < x_1 < \cdots < x_{m-1} < x_m$, we want to find $$\min_{0 < i_1 < i_2 < \cdots < i_n < m} \max \{x_{i_1} - x_0, x_{i_2} - x_{i_1}, \dots, x_{i_n} - x_{i_{n-1}}, x_m - x_{i_n}\}.$$ We can think of this as the problem of "cutting" the interval $[x_0, x_m]$ into $n+1$ blocks, where we are only allowed to cut in the prescribed places $x_i$, and we want to minimize the largest resulting block size.
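The cutting formulation can be checked directly with a brute-force reference implementation (a sketch of my own; the function name and interface are not from the answer). It enumerates all $\binom{m-1}{n}$ choices of interior cut points, so it is exponential and useful only for validating faster algorithms on small inputs:

```python
from itertools import combinations

def min_max_gap_bruteforce(x, n):
    """Brute-force reference: x = [x_0, x_1, ..., x_m] sorted; choose n interior
    cut indices i_1 < ... < i_n and minimize the largest resulting block size.
    Exponential in m; for checking faster algorithms only."""
    m = len(x) - 1
    best = float("inf")
    for idx in combinations(range(1, m), n):
        cuts = [x[0]] + [x[i] for i in idx] + [x[m]]
        largest = max(b - a for a, b in zip(cuts, cuts[1:]))
        best = min(best, largest)
    return best
```

For example, with $x = (0, 0.2, 0.5, 0.9, 1)$ and $n = 1$, the best single cut is at $0.5$, giving a largest block of $0.5$.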

Define $A(k, r)$ as the minimum largest block size resulting from cutting $[x_0, x_k]$ into $r+1$ blocks, i.e. $$A(k, r) = \min_{0 < i_1 < i_2 < \cdots < i_r < k} \max \{x_{i_1} - x_0, x_{i_2} - x_{i_1}, \dots, x_{i_r} - x_{i_{r-1}}, x_k - x_{i_r}\}$$ so we want to find $A(m, n)$.

For $r \geq 1$ (adopting the convention $A(k, 0) = x_k - x_0$, i.e. no cuts), it is not hard to see that $$A(k, r) = \min_{r \leq j < k} \max\{A(j, r-1), x_k - x_j\}$$ since \begin{align*} A(k, r) &= \min_{r \leq i_r < k} \min_{0 < i_1 < \cdots < i_{r-1} < i_r} \max\{x_{i_1} - x_0, x_{i_2} - x_{i_1}, \dots, x_{i_r} - x_{i_{r-1}}, x_k - x_{i_r}\} \\ &= \min_{r \leq i_r < k} \max \left\{\min_{0 < i_1 < \cdots < i_{r-1} < i_r} \max\{x_{i_1} - x_0, x_{i_2} - x_{i_1}, \dots, x_{i_r} - x_{i_{r-1}}\}, x_k - x_{i_r} \right\} \\ &= \min_{r \leq i_r < k} \max \{A(i_r, r-1), x_k - x_{i_r}\}. \end{align*} This gives a pretty direct dynamic programming approach:

set all A(k, 0) = x_k - x_0
for r in {1, ..., n}:
    for k in {r+1, ..., m}:
        iterate over all j in {r, ..., k-1} to find the smallest max(A(j, r-1), x_k - x_j)
        set A(k, r) to this value

which clearly runs in $O(nm^2)$ steps. However, we can actually get this down to $O(nm \log m)$ steps by using binary search, as described below.
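A direct Python translation of this dynamic program might look as follows (a sketch under the convention $A(k,0) = x_k - x_0$; the function name is mine). Only the values for the current $r$ are kept, so the memory use is $O(m)$:

```python
def min_max_gap_dp(x, n):
    """O(n m^2) dynamic program. x = [x_0, ..., x_m] sorted; returns A(m, n),
    the minimum largest block size achievable with exactly n interior cuts."""
    m = len(x) - 1
    # A[k] holds A(k, r) for the current r; start from r = 0 (no cuts).
    A = [x[k] - x[0] for k in range(m + 1)]
    for r in range(1, n + 1):
        new_A = [float("inf")] * (m + 1)
        for k in range(r + 1, m + 1):
            # try every last cut position j and keep the best
            new_A[k] = min(max(A[j], x[k] - x[j]) for j in range(r, k))
        A = new_A
    return A[m]
```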

The important fact is that in the computation of $$A(k, r) = \min_{r \leq j < k} \max\{A(j, r-1), x_k - x_j\},$$ $x_k - x_j$ is a strictly decreasing function of $j$, while maybe less obviously, $A(j, r-1)$ is in fact an increasing function of $j$. To show this last part, note that $A(k, r)$ can also be defined as the minimum largest block size when we cut $[x_0, x_k]$ into at most $r+1$ blocks (that is, with at most $r$ cuts): making fewer cuts cannot decrease the largest block size. Then for $j' < j$, if we have a choice of at most $r-1$ cuts of $[x_0, x_j]$, this gives a choice of at most $r-1$ cuts of $[x_0, x_{j'}]$ (namely those cuts lying in the smaller interval), and the largest block size cannot increase, so it follows that $A(j', r-1) \leq A(j, r-1)$.

Thus, if we consider $f(j) = \max\{A(j, r-1), x_k - x_j\}$ as a function of $j \in \{r, \dots, k-1\}$, we can break $\{r, \dots, k-1\}$ into the two regions $\{r, \dots, t-1\}$ where $x_k - x_j$ is larger, and $\{t, \dots, k-1\}$ where $A(j, r-1)$ is larger (though one of these may be empty), so that $f$ is decreasing on the first region, and increasing on the second region. This means the minimum must occur at either $t-1$ or $t$, and one of these must be the point at which $g(j) = A(j, r-1) - (x_k - x_j)$ (a strictly increasing function) is closest to 0.

Now, since $g(j)$ is strictly increasing, we can find the index $j_0$ where $g(j)$ is closest to $0$ by binary searching for $0$ in the list $\{g(r), g(r+1), \dots, g(k-1)\}$. Importantly, we never explicitly construct this list; we only compute the values $g(j)$ as they are queried, so the search takes $O(\log m)$ steps. Then the index $j^*$ which minimizes $f(j)$ is among $j_0-1, j_0, j_0+1$, so we set $A(k, r)$ to be the smallest of $f(j_0-1), f(j_0), f(j_0+1)$. Since we take $O(\log m)$ steps to compute each $A(k, r)$, it takes $O(nm\log m)$ steps to compute all $A(k, r)$, and in particular, $A(m, n)$.
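The binary-search refinement can be sketched as follows (my own implementation of the idea above, not code from the answer). For each $(k, r)$ it searches for the smallest $j$ with $g(j) = A(j, r-1) - (x_k - x_j) \geq 0$ and then compares that index with its left neighbor:

```python
def min_max_gap_fast(x, n):
    """O(n m log m) version: for each (k, r), binary-search for the crossover
    between the increasing A(j, r-1) and the decreasing x_k - x_j."""
    m = len(x) - 1
    A = [x[k] - x[0] for k in range(m + 1)]  # r = 0: no cuts
    for r in range(1, n + 1):
        new_A = [float("inf")] * (m + 1)
        for k in range(r + 1, m + 1):
            # find the smallest j in {r, ..., k-1} with g(j) >= 0, if any
            lo, hi = r, k - 1
            while lo < hi:
                mid = (lo + hi) // 2
                if A[mid] - (x[k] - x[mid]) >= 0:
                    hi = mid
                else:
                    lo = mid + 1
            # f(j) = max(A[j], x[k] - x[j]) is minimized at lo or lo - 1
            candidates = [j for j in (lo - 1, lo) if r <= j <= k - 1]
            new_A[k] = min(max(A[j], x[k] - x[j]) for j in candidates)
        A = new_A
    return A[m]
```

On small inputs this agrees with the quadratic dynamic program, while each $A(k, r)$ now costs only $O(\log m)$ comparisons.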

Thus in the relevant case, where $m = n^2 + 1$, and $x_0 = 0$, $x_m = 1$, the algorithm takes $O(n(n^2 + 1)\log(n^2 + 1)) = O(n^3 \log n)$ steps.

Another answer:

I would do it with linear programming: $$\min z$$ subject to \begin{align*} z &\ge x_{i_1} \\ z &\ge x_{i_2}-x_{i_1} \\ &\;\;\vdots \\ z &\ge 1-x_{i_n}. \end{align*}
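Note that, for a fixed subgroup $x_{i_1} < \cdots < x_{i_n}$, this LP simply recovers the largest gap; the combinatorial choice of the subgroup itself is not captured by the linear constraints. A minimal sketch of the fixed-subgroup LP, assuming SciPy is available (the function name and interface are mine):

```python
from scipy.optimize import linprog

def lp_max_gap(sub, lo=0.0, hi=1.0):
    """Solve min z s.t. z >= each gap, for a fixed sorted subgroup `sub`
    inside [lo, hi]. The optimum is just the largest gap."""
    gaps = [sub[0] - lo] + [b - a for a, b in zip(sub, sub[1:])] + [hi - sub[-1]]
    # one variable z; each constraint z >= gap_i is written as -z <= -gap_i
    res = linprog(c=[1.0], A_ub=[[-1.0]] * len(gaps), b_ub=[-g for g in gaps],
                  bounds=[(None, None)])
    return res.fun
```

For the subgroup $(0.2, 0.5)$ in $[0, 1]$ the gaps are $0.2, 0.3, 0.5$, so the LP optimum is $0.5$.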