I have a discrete optimization problem.
Let's say we have a shop that sells $N$ different stones. Each stone is characterized by 3 real numbers, say weight $a_i$, diameter $b_i$ and cost $c_i$:
$s_i = (a_i, b_i, c_i), \text{for } i = 1, 2, ..., N$
Now I want to buy some of these stones. That is, I want to find a subset $I$ of the indices from $1$ to $N$:
$I \subseteq \{1, 2, 3, \ldots, N\}$
I want to choose this subset in such a way that
1) The total weight exceeds $A$: \begin{align} A < \sum_{i \in I} a_i \tag{1} \end{align}
2) The total diameter, i.e. the length of the longest chain I can build with the chosen stones, exceeds $B$:
$B < \sum_{i \in I} b_i\tag{2}$
3) Within these constraints I minimize my total cost $C$:
$C = \sum_{i \in I} c_i \tag{3}$
$\min C$!
So the question is: given $A, B, N$ and all parameters $s_i$, how do I find the set $I$ that minimizes the cost (3) under the constraints (1) and (2)?
Can anyone give me a hint as to what this kind of optimization problem is called, and what approaches exist to solve it? I am trying to implement such an algorithm. $N$ will in practice be somewhere between 100 and 1000. The parameters $a_i, b_i$ and $c_i$ will span several orders of magnitude (i.e. there will be many cheap small stones, a few stones that are 100 or 1000 times larger, and pretty much anything in between). I really need an algorithm suitable for practical application. Bonus points if anyone can tell me about a JavaScript implementation that solves this kind of problem :)
I found that the Knapsack problem looks pretty similar. However, it is not exactly the same. Can anyone describe a practical algorithm for this problem? I.e. one that runs in polynomial time w.r.t. $N$? Approximations on the $a_i, b_i$ and $c_i$ are perfectly fine (they are somewhat rough estimates to begin with...).
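To make the problem statement concrete, here is a brute-force reference solution in Python (the function name and the tuple representation of the stones are my own choices; JavaScript would look analogous). It enforces the strict inequalities (1) and (2) literally, but it is exponential in $N$ and therefore only usable for roughly $N \le 20$, e.g. as a correctness check for faster methods:

```python
from itertools import combinations

def brute_force(stones, A, B):
    # stones: list of (a_i, b_i, c_i) tuples
    # Exhaustive reference solution: try every subset I of {0, ..., n-1}
    # and keep the cheapest one satisfying (1) and (2) strictly.
    best_cost, best_set = float("inf"), None
    n = len(stones)
    for r in range(n + 1):
        for I in combinations(range(n), r):
            total_a = sum(stones[i][0] for i in I)
            total_b = sum(stones[i][1] for i in I)
            if total_a > A and total_b > B:  # constraints (1) and (2)
                cost = sum(stones[i][2] for i in I)
                if cost < best_cost:
                    best_cost, best_set = cost, I
    return best_cost, best_set
```

The example data used below is made up for illustration; with stones $(3,2,4)$, $(2,3,3)$, $(5,5,10)$ and $A = B = 4$, the cheapest feasible subset is the first two stones at cost $7$.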
Edit
Thanks for the answers so far. This already helped me.
There is a minor extra complication: both $a_i$ and $b_i$ can actually also take on negative values. However, that does not change too much; one can adapt the algorithm suggested by Robert Israel.
Unfortunately the run-time is too high. Let's say (for simplicity) that rounding all weights to $1\,\mathrm{g}$ precision is fine. Then the stones have a typical weight of $10\,\mathrm{g}$ to $1000\,\mathrm{g}$. If I deal with about $1000$ stones, I end up with a total weight on the order of $100{,}000$. I get about the same for the diameters. So my total run-time is on the order of $100{,}000 \times 100{,}000 \times 1000 = 10^{13}$ operations, which in practice will probably take on the order of $100$ hours, where only a couple of minutes would be acceptable.
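For reference, a pseudopolynomial dynamic program of the kind being discussed can be sketched as follows. This is my own sketch, not necessarily identical to the suggested algorithm: the state is the pair of running totals (clamped at the targets), so it assumes $a_i, b_i$ have already been rounded to nonnegative integers, and it uses non-strict thresholds (for the strict inequalities (1), (2), pass $A+1$, $B+1$). Its cost is $O(A \cdot B \cdot N)$, which is exactly the $10^{13}$ estimate above:

```python
def min_cost_cover(stones, A, B):
    # stones: list of (a_i, b_i, c_i) with a_i, b_i nonnegative integers.
    # dp[a][b] = minimum cost of a subset whose weight/diameter totals,
    # clamped at A and B respectively, reach exactly (a, b).
    INF = float("inf")
    dp = [[INF] * (B + 1) for _ in range(A + 1)]
    dp[0][0] = 0
    for ai, bi, ci in stones:
        # Each stone may be used at most once, so transitions must read
        # from the previous layer: work on a snapshot of dp.
        ndp = [row[:] for row in dp]
        for a in range(A + 1):
            for b in range(B + 1):
                if dp[a][b] < INF:
                    na, nb = min(A, a + ai), min(B, b + bi)
                    if dp[a][b] + ci < ndp[na][nb]:
                        ndp[na][nb] = dp[a][b] + ci
        dp = ndp
    return dp[A][B]  # INF means no feasible subset exists
```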
Does anyone have an idea of how to improve the run-time further? I already thought in the direction of treating the smaller stones as a continuum somehow and to start rounding once I have dealt with the small stones. But I am having trouble finding a somewhat robust solution that would work independently of the details of how the masses are distributed and so on.
Another direction I am thinking in is more on the greedy-algorithm side: one could minimize $$C - p \sum_{i \in I} a_i - q \sum_{i \in I} b_i $$ without the constraints (1) and (2), and tune $p$ and $q$ in such a way that (1) and (2) are both barely satisfied. That probably gives a decent starting point. Maybe one can then fix the most "profitable" stones at this point and ignore the least "profitable" ones, and run the original algorithm suggested by Robert Israel with the fixed and ignored stones excluded (the fixed stones would of course imply reduced values of $A$ and $B$, so that adding them back into the found solution yields the correct original $A$ and $B$). Does this sound reasonable? Are there better ways to do this?
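This relaxation idea can be sketched directly, since the relaxed objective decomposes per stone: stone $i$ is taken exactly when $c_i - p\,a_i - q\,b_i < 0$. One standard way to tune $p, q$ is a subgradient step (raise a multiplier while its constraint is violated, lower it otherwise); the step sizes and iteration count below are arbitrary choices of mine, and the method only yields a heuristic feasible solution, not a guaranteed optimum:

```python
def relaxed_pick(stones, p, q):
    # Minimizing C - p*sum(a_i) - q*sum(b_i) decomposes per stone:
    # include stone i exactly when its "reduced cost" is negative.
    return [i for i, (a, b, c) in enumerate(stones) if c - p * a - q * b < 0]

def tune(stones, A, B, steps=500, lr0=1.0):
    # Subgradient ascent on the multipliers p, q; remember the cheapest
    # feasible selection encountered along the way.
    p = q = 0.0
    best = None
    for t in range(1, steps + 1):
        I = relaxed_pick(stones, p, q)
        wa = sum(stones[i][0] for i in I)
        wb = sum(stones[i][1] for i in I)
        if wa >= A and wb >= B:  # feasible: record if cheapest so far
            cost = sum(stones[i][2] for i in I)
            if best is None or cost < best[1]:
                best = (I, cost)
        lr = lr0 / t  # diminishing step size
        p = max(0.0, p + lr * (A - wa))
        q = max(0.0, q + lr * (B - wb))
    return best  # (indices, cost), or None if nothing feasible was found
```

The result can then serve as the starting point / incumbent for the fix-and-exclude scheme described above.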
This likely solves quickly with a generic integer linear programming (ILP) solver. Do you have any sample data?
Explicitly, for $i\in\{1,\dots,N\}$ introduce binary decision variable $x_i$ to indicate whether stone $i$ is selected. The problem is to minimize $\sum_{i=1}^N c_i x_i$ subject to: \begin{align} \sum_{i=1}^N a_i x_i &\ge A\\ \sum_{i=1}^N b_i x_i &\ge B \end{align} After solving, take $I=\{i\in\{1,\dots,N\}: x_i = 1\}$.
One good thing about ILP is that you get both lower and upper bounds, so if you stop early you have a measure of the quality of the best solution obtained so far.
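To illustrate that bounding behavior without depending on a particular solver library, here is a toy branch-and-bound for this model in Python (my own sketch; it assumes nonnegative $a_i, b_i, c_i$ and non-strict constraints, and uses a crude shortfall-times-best-ratio lower bound, whereas a real ILP solver would use much stronger LP-relaxation bounds):

```python
import math

def solve_ilp_bb(stones, A, B):
    # min sum c_i x_i  s.t.  sum a_i x_i >= A,  sum b_i x_i >= B,  x_i in {0,1}
    n = len(stones)
    best_cost, best_set = math.inf, None

    def lower_bound(i, cost, wa, wb):
        # Covering the remaining weight (resp. diameter) shortfall costs at
        # least shortfall * (cheapest cost per unit) among undecided stones.
        need_a, need_b = max(0, A - wa), max(0, B - wb)
        ra = min((c / a for a, b, c in stones[i:] if a > 0), default=math.inf)
        rb = min((c / b for a, b, c in stones[i:] if b > 0), default=math.inf)
        la = 0 if need_a == 0 else need_a * ra
        lb = 0 if need_b == 0 else need_b * rb
        return cost + max(la, lb)

    def dfs(i, cost, wa, wb, chosen):
        nonlocal best_cost, best_set
        if wa >= A and wb >= B:
            # Feasible; with nonnegative costs, adding more stones never helps.
            if cost < best_cost:
                best_cost, best_set = cost, list(chosen)
            return
        if i == n or lower_bound(i, cost, wa, wb) >= best_cost:
            return  # prune: this subtree cannot beat the incumbent
        a, b, c = stones[i]
        dfs(i + 1, cost + c, wa + a, wb + b, chosen + [i])  # take stone i
        dfs(i + 1, cost, wa, wb, chosen)                    # skip stone i

    dfs(0, 0, 0, 0, [])
    return best_cost, best_set  # (inf, None) if infeasible
```

At any point, the incumbent `best_cost` is an upper bound and the smallest open-node bound is a lower bound, which is exactly the early-stopping quality metric mentioned above.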