The exercise is the following:
We are given $n\in\mathbb{N}$ balls and $n$ buckets, where the $i$-th bucket initially contains $a_i\geq 0$ of them. Let us define $S = \sum_i a_i$; at the beginning, $S\leq n$, so $n - S$ balls remain to be placed.
After we place the remaining $n - S$ balls, the $i$-th bucket contains $b_i$ balls. We compute the percentages $100b_i / n$ and round each to the nearest integer $p_i$, halves up (e.g., $12.1$ rounds to $12$ and $12.5$ rounds to $13$).
What is the maximal possible sum $\sum_i p_i$?
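For concreteness (the naming is mine), the objective can be computed with integer arithmetic, since round-half-up of $100b/n$ equals $\lfloor (200b + n) / 2n \rfloor$:

```python
def rounded_percentages(b, n):
    # round half up: floor(100*b_i/n + 1/2) == (200*b_i + n) // (2*n)
    return [(200 * x + n) // (2 * n) for x in b]

print(rounded_percentages([1, 2, 10], 13))       # [8, 15, 77]
print(sum(rounded_percentages([1, 2, 10], 13)))  # 100
```

Note that the rounded percentages need not sum to exactly $100$ in general, which is what makes maximizing $\sum_i p_i$ non-trivial.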
The algorithm that solves the problem is the following:
- sort the buckets in increasing order of the number of additional balls needed to make a bucket's percentage round up,
- add the needed balls to the first bucket, then to the second bucket, and so on, while any balls remain.
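A minimal sketch of this greedy, under my reading that a bucket's "cost" is the fewest extra balls until its percentage crosses the round-up threshold (with half-up rounding, $100b/n$ rounds up exactly when $2\,(100b \bmod n) \geq n$); the leftover-placement rule at the end is my own assumption:

```python
def cost(b, n):
    """Fewest extra balls until 100*(b+k)/n rounds up; n+1 if it never does."""
    for k in range(n + 1):  # residues of 100*(b+k) mod n cycle within n steps
        if 2 * ((100 * (b + k)) % n) >= n:
            return k
    return n + 1  # the percentage is always an exact integer

def greedy_sum(a, n):
    b = list(a)
    spare = n - sum(b)
    # cheapest round-ups first
    for i in sorted(range(len(b)), key=lambda i: cost(b[i], n)):
        k = cost(b[i], n)
        if k <= spare:
            b[i] += k
            spare -= k
    b[0] += spare  # leftover rule is my guess; rounding is monotone, so it never hurts
    return sum((200 * x + n) // (2 * n) for x in b)

# n = 13: one bucket with 2 balls, twelve empty buckets, 11 spare balls
print(greedy_sum([2] + [0] * 12, 13))  # 103
```

In the demo, the eleven spare balls go to eleven empty buckets (cost $1$ each), giving eleven buckets at $8\%$ plus the original bucket at $15\%$.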
My question/claim is:
Why can we
- sort only the non-empty buckets in the first step, i.e. the buckets with $a_i > 0$,
- add balls to the non-empty buckets, and
- only then proceed to the empty buckets (if any balls remain),
but still obtain the optimal sum?
For example, when $n = 13$, you need
- only one additional ball to make an empty bucket round up: $1/13 = 7.69\%$, which rounds to $8\%$,
- but two additional balls for a bucket with two balls, since $2/13 = 15.38\%$ and $3/13 = 23.08\%$ round down, while $4/13 = 30.77\%$ rounds up.
Therefore, it seems that the latter algorithm may lead to a suboptimal solution. However, I was not able to find a counter-example.
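Lacking a proof, one can brute-force small instances. The sketch below (all naming, and the leftover-placement rule, are mine) compares the empty-buckets-last variant against an exhaustive optimum; anything it prints is only a candidate counter-example to inspect by hand, since the leftover rule is a guess:

```python
from itertools import combinations_with_replacement

def score(b, n):
    # sum of percentages 100*b_i/n, each rounded half up
    return sum((200 * x + n) // (2 * n) for x in b)

def round_up_cost(x, n):
    # fewest extra balls until 100*(x+k)/n rounds up; n+1 if it never does
    for k in range(n + 1):
        if 2 * ((100 * (x + k)) % n) >= n:
            return k
    return n + 1

def greedy(a, n, empties_last):
    b = list(a)
    spare = n - sum(b)
    if empties_last:  # the variant: non-empty buckets first
        key = lambda i: (b[i] == 0, round_up_cost(b[i], n))
    else:             # the original: cheapest round-up first
        key = lambda i: round_up_cost(b[i], n)
    for i in sorted(range(len(b)), key=key):
        k = round_up_cost(b[i], n)
        if k <= spare:
            b[i] += k
            spare -= k
    b[0] += spare  # leftover rule (my guess); monotone rounding, so it never hurts
    return score(b, n)

def optimum(a, n):
    # exhaustive maximum over every placement of the spare balls
    best = 0
    for extra in combinations_with_replacement(range(len(a)), n - sum(a)):
        b = list(a)
        for i in extra:
            b[i] += 1
        best = max(best, score(b, n))
    return best

for n in range(1, 8):  # n <= 7 keeps the exhaustive search fast
    for a in combinations_with_replacement(range(n + 1), n):
        if sum(a) <= n and greedy(a, n, empties_last=True) != optimum(a, n):
            print("candidate counter-example:", n, a)
```

Interestingly, on the $n = 13$ example above both orders tie: the variant spends two balls on the two-ball bucket and one ball on each of nine empty buckets, for $31\% + 9 \cdot 8\% = 103\%$, the same total the original order achieves.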
The reason for my question is that my code (which, by coincidence, implements the latter algorithm) successfully passed all test cases from a competition organised by Google ...