Reducing column ranges of a matrix


I'm looking for an algorithm that reduces the sum of the column ranges (largest minus smallest nonzero value per column) of a sparse integer matrix by subtracting $1$ from all nonzero elements in some subset of the rows.

Let $R = \{1, \ldots, m\}$ and $C = \{1, \ldots, n\}$. Let $N \subset R \times C$ and $f: N \rightarrow \mathbb{Z}$.

Write $r(c) := \{ i \in R : (i, c) \in N\}$, the rows with a nonzero value in column $c$, and define

  • $\overline{r(c)} := \{ i \in r(c) : f(i, c) = \max_{j \in r(c)} f(j, c) \}$
  • $\underline{r(c)} := \{ i \in r(c) : f(i, c) = \min_{j \in r(c)} f(j, c) \}$

Let $S \subset R$ be the set of rows from which $1$ will be subtracted. Then the change in the range of column $c$ is given by $$ p_c(S) := \begin{cases} -1 & \text{if} \quad \overline{r(c)} \subseteq S \wedge \underline{r(c)} \subseteq S^c\\ 1 & \text{if} \quad \underline{r(c)} \cap S \neq \varnothing \wedge \overline{r(c)} \cap S^c \neq \varnothing\\ 0 & \text{otherwise} \end{cases} $$

That is, if every largest value in a column gets $1$ subtracted while the smallest values all stay the same, the range shrinks by $1$. The range can also grow, if $1$ is subtracted from a smallest value in a column while one of the largest values stays the same.
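To make the objective concrete, here is a small sketch (all names are illustrative, not from the question) that evaluates $\sum_{c} p_c(S)$ for a candidate $S$, with the sparse matrix given as a dict mapping `(row, col)` to its nonzero value:

```python
from collections import defaultdict

def total_range_change(entries, S):
    """entries: dict {(i, c): f(i, c)}; S: set of rows to decrement.
    Returns sum over columns c of p_c(S)."""
    cols = defaultdict(list)
    for (i, c), v in entries.items():
        cols[c].append((i, v))
    total = 0
    for rows in cols.values():
        hi = max(v for _, v in rows)
        lo = min(v for _, v in rows)
        max_rows = {i for i, v in rows if v == hi}
        min_rows = {i for i, v in rows if v == lo}
        if max_rows <= S and not (min_rows & S):
            total -= 1  # all maxima drop by 1, all minima stay: range shrinks
        elif (min_rows & S) and (max_rows - S):
            total += 1  # some minimum drops, some maximum stays: range grows
    return total
```

For example, with a single column holding $f(1,1)=3$ and $f(2,1)=1$, choosing $S=\{1\}$ lowers the only maximum while the minimum stays, so the total change is $-1$.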

I'm looking for any $S \subset R$ such that $\sum_{c \in C} p_c(S) < 0$.

Heuristics are appreciated too! Note that $m \ll n$, $|N| \ll |R \times C|$ and that $f$ has a thin-tailed distribution.
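Since $m \ll n$, one baseline worth noting is exhaustive search over all $2^m - 1$ nonempty row subsets whenever $m$ is modest (say $m \lesssim 20$); the per-column max/min row sets only need to be computed once. A self-contained sketch under those assumptions (names are illustrative):

```python
from collections import defaultdict
from itertools import combinations

def column_stats(entries):
    """Per-column (max-row set, min-row set) from {(row, col): value}."""
    cols = defaultdict(list)
    for (i, c), v in entries.items():
        cols[c].append((i, v))
    stats = []
    for rows in cols.values():
        hi = max(v for _, v in rows)
        lo = min(v for _, v in rows)
        stats.append(({i for i, v in rows if v == hi},
                      {i for i, v in rows if v == lo}))
    return stats

def find_improving_subset(entries, m):
    """Try every nonempty S subset of {1,...,m}; feasible when m is small.
    Returns an S with sum_c p_c(S) < 0, or None if no such S exists."""
    stats = column_stats(entries)
    for k in range(1, m + 1):
        for S in map(set, combinations(range(1, m + 1), k)):
            change = 0
            for max_rows, min_rows in stats:
                if max_rows <= S and not (min_rows & S):
                    change -= 1
                elif (min_rows & S) and (max_rows - S):
                    change += 1
            if change < 0:
                return S
    return None
```

For larger $m$, the same evaluator could drive a local search instead (start from a random $S$ and flip row membership while the total change improves), though nothing here guarantees a global optimum.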