Non-Optimality of First-Fit-Decreasing Algorithm for Bin Packing


The First-Fit-Decreasing algorithm runs in quadratic time. If it correctly solved the bin packing decision problem — given weights $w_1,\dotsc,w_n\in [0,1]$ and a number $k$, decide whether the weights fit into $k$ unit-capacity bins — then the bin packing decision problem would be in P, which would show P = NP, since the problem is NP-complete.

But since that is presumably not the case, there must be instances of $w_1,\dotsc,w_n\in [0,1]$ and $k$ for which the algorithm gives the wrong answer.

However, I could not find such a counterexample, nor have I seen this issue discussed anywhere. Can someone please elaborate?


Best answer:

Consider a bin capacity of 10 with item sizes 5, 4, 3, 2, 2, 2, 2. These can be packed into two bins, namely (5, 3, 2) and (4, 2, 2, 2), but First-Fit-Decreasing requires 3 bins: it places the 5 and the 4 together, wasting 1 unit of space in the first bin, then fills a second bin with 3, 2, 2, 2, and must open a third bin for the remaining 2.
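A short Python sketch of the standard First-Fit-Decreasing procedure (the function name and structure here are my own) reproduces this behavior on the instance above:

```python
def first_fit_decreasing(items, capacity):
    """Sort items in decreasing order; place each into the first bin
    with enough remaining space, opening a new bin if none fits."""
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])  # no existing bin fits: open a new one
    return bins

bins = first_fit_decreasing([5, 4, 3, 2, 2, 2, 2], capacity=10)
print(bins)  # [[5, 4], [3, 2, 2, 2], [2]] — three bins, although two suffice
```

The 5 and the 4 are greedily paired into the first bin, and the wasted unit of space there forces the last 2 into a third bin, even though the packing (5, 3, 2), (4, 2, 2, 2) uses only two.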

Edit: the weights can be made to lie in the range $[0, 1]$ by dividing all sizes and the capacity by 10.