How to find the minimum number of vectors in $\mathbb{N}^n$ to exceed a given sum?


Given a finite multiset $A$ of vectors $a_1,\dots,a_n\in \mathbb{N}^n$ where $n\geq 1$ and a target vector $b\in \mathbb{N}^n$, how can we find a smallest subset $S\subseteq A$ for which $\sum_{s\in S} s \geq b$ where $\geq$ denotes the component-wise comparison?

For vectors in $\mathbb{N}$ (i.e. the case $n=1$), this is easy: sort $A$ in descending order and take the first $k$ elements such that their sum is at least $b$, with $k$ as small as possible. But what about higher dimensions?
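The one-dimensional greedy strategy can be sketched as follows (function name is my own choice, not from the question):

```python
def min_count_1d(values, b):
    """Return the minimum number of values whose sum reaches b, or None."""
    total, count = 0, 0
    # Largest values first: each one closes the remaining gap as fast as possible.
    for v in sorted(values, reverse=True):
        if total >= b:
            break
        total += v
        count += 1
    return count if total >= b else None  # None if b is unreachable
```

Taking values in descending order is optimal here because any solution using a smaller value can be improved (or left equal) by swapping in a larger unused one.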

My idea was the following. First define a distance function which "ignores" all coordinates in which the target vector has already been reached, i.e. $D(u,v):=d((u_{k_1},\dots,u_{k_m}),(v_{k_1},\dots,v_{k_m}))$, taken over exactly those indices $k_i \leq n$ with $u_{k_i} < v_{k_i}$. Here, little $d$ is just the Euclidean metric (potentially in a smaller vector space, since we "forget" coordinates). For vectors already matching or exceeding the target vector in every component, $D$ is defined to be zero. Now set $x$ to be the zero vector and repeatedly add an $a\in A$ to $x$ which minimises the distance $D(x+a,b)$, i.e. for every $a'\neq a$, $D(x+a',b)\geq D(x+a,b)$, until $D(x,b)=0$. After adding some $a$, an instance of it needs to be removed from the multiset $A$, of course.
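A minimal sketch of this heuristic (the function names `D` and `greedy` are illustrative; no claim of optimality is made, which is exactly the open question):

```python
import math

def D(u, v):
    # Euclidean distance restricted to the coordinates where u still falls short of v.
    return math.sqrt(sum((vi - ui) ** 2 for ui, vi in zip(u, v) if ui < vi))

def greedy(A, b):
    """A: list of tuples (a multiset of vectors); b: target tuple.
    Repeatedly add the vector that most reduces D(x, b)."""
    A = list(A)          # work on a copy so the caller's multiset is untouched
    x = tuple(0 for _ in b)
    chosen = []
    while D(x, b) > 0:
        if not A:
            return None  # target unreachable
        best = min(A, key=lambda a: D(tuple(xi + ai for xi, ai in zip(x, a)), b))
        A.remove(best)   # remove one instance from the multiset
        x = tuple(xi + ai for xi, ai in zip(x, best))
        chosen.append(best)
    return chosen
```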

Right now, I am trying to prove or disprove that this works. Has anyone got ideas or another strategy?



BEST ANSWER

You can solve the problem via integer linear programming as follows. For each $s\in A$, let binary decision variable $x_s$ indicate whether $s\in S$. The problem is to minimize $\sum_{s\in A} x_s$ subject to linear constraints $$\sum_{s\in A} (a_{s})_i x_s \ge b_i \quad \text{for $i\in\{1,\dots,n\}$}.$$
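This formulation can be handed to any MILP solver. For small instances, an equivalent stdlib-only brute force (searching subsets by increasing size, so the first hit is a smallest one) can serve as a prototype or as a check against a solver; the function name is my own:

```python
from itertools import combinations

def smallest_covering_subset(A, b):
    """Return a smallest sub(multi)set of A whose component-wise sum
    dominates b, or None if no subset works. Exponential time: small A only."""
    n = len(b)
    for k in range(len(A) + 1):                 # try sizes 0, 1, 2, ...
        for combo in combinations(range(len(A)), k):
            sums = [sum(A[i][j] for i in combo) for j in range(n)]
            if all(s >= t for s, t in zip(sums, b)):
                return [A[i] for i in combo]    # first hit has minimal size
    return None
```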


Found the connection. This problem is at least as hard as the Set Cover optimization problem with the same $n$. Indeed, given a collection $S$ of subsets of $\{1,2,\dots,n\}$, we can form a tuple $t_m$ for each subset: the subset $\{i_1, \dots, i_k\}$ corresponds to the tuple $t_m$ that has a $1$ in positions $i_1, \dots, i_k$ and a $0$ elsewhere. Define $A$ as the collection of all tuples $t_m$ obtained this way. For example, if $n=5$ and $$S = \{\{1,2,3\}, \{2,4\}, \{3,4\}, \{4,5\}\},$$ then the new tuples would be $$A = \{ (1,1,1,0,0), (0,1,0,1,0), (0,0,1,1,0), (0,0,0,1,1)\}.$$
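The encoding of subsets as 0/1 incidence vectors can be written in a few lines (the function name is illustrative):

```python
def incidence_vectors(n, S):
    """Encode each subset of {1, ..., n} as a 0/1 tuple of length n,
    with a 1 exactly at the positions the subset contains."""
    return [tuple(1 if i in sub else 0 for i in range(1, n + 1)) for sub in S]
```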

If you had an algorithm to solve your problem with $b=(1,1,\dots,1)$ and this $A$, then you could use it to solve the Set Cover instance: minimising the number of chosen tuples whose sum dominates $b$ is exactly minimising the number of sets needed to cover $\{1,\dots,n\}$.

So the problem is at least NP-hard. Now it would be nice to know whether you can use a Set Cover algorithm to solve your problem, that is, a reduction in the opposite direction, which would show that it is no harder than Set Cover. But right now I don't see how to prove that, nor whether it is even true.