I have a table consisting of a number of whole percentages $x_i$ between $0\%$ and $100\%$. However, they don't add up to $100\%$ (rather they add up to $101\%$). But they 'should'.
Assuming that any given percentage $x_i\%$ is rounded from some precise (unrounded) $y_i\%$ which is uniformly distributed according to $y_i\sim \text{UNIF}(\max{(x_i-0.5\%,0\%)},\min(x_i+0.5\%,100\%))$, what is the best way to compute estimates of the unrounded $y_i$s, i.e. $\text{E}(y_i)$? I prefer expectation (over MLE).
NB: The table does contain some $x_i=0\%$ entries.
NB 2: The $\text{E}(y_i)$s that I am looking for are reals, not integers.
NB 3: Simply scaling doesn't work. For example, take $10\%$, $80\%$ and $11\%$, for a total of $101\%$. Scaling those down would indeed make the total $100\%$. But $80\%\cdot \frac{100}{101}=79.2079\ldots\%$ now rounds to $79\%$ instead of $80\%$, contradicting the observed table. So this cannot be right.
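The failure mode in NB 3 can be checked numerically; this is a plain Python sketch of that example:

```python
# NB 3 example: rescale 10%, 80%, 11% (total 101%) to sum to 100%.
xs = [10, 80, 11]
rescaled = [x * 100 / sum(xs) for x in xs]

print(rescaled)                      # [9.9009..., 79.2079..., 10.8910...]
print([round(v) for v in rescaled]) # [10, 79, 11] -- the 80% entry now
                                     # re-rounds to 79%, contradicting the data
```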
NB 4: Distributing the error equally over all entries has a drawback too. If the error were $-1\%$, some of the resulting $\text{E}(y_i)$s (e.g. those belonging to $x_i=0\%$) would become negative. That indicates that that procedure cannot be right either.
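Under the uniform model stated above, one way to approximate $\text{E}\left(y_i \,\middle|\, \sum_j y_j = 100\%\right)$ is rejection sampling: draw each $y_i$ from its interval and keep only draws whose total lands (within a small tolerance) on $100\%$. This is a Monte Carlo sketch, not an exact formula; `conditional_means` and its parameters are hypothetical names:

```python
import random

def conditional_means(xs, total=100.0, eps=0.05, n_samples=200_000, seed=0):
    """Monte Carlo estimate of E(y_i | sum y = total) under the uniform
    rounding model, via rejection sampling with tolerance eps."""
    rng = random.Random(seed)
    # each y_i lives in [max(x_i - 0.5, 0), min(x_i + 0.5, 100)]
    bounds = [(max(x - 0.5, 0.0), min(x + 0.5, 100.0)) for x in xs]
    accepted = []
    for _ in range(n_samples):
        ys = [rng.uniform(lo, hi) for lo, hi in bounds]
        if abs(sum(ys) - total) < eps:  # keep draws consistent with the constraint
            accepted.append(ys)
    k = len(accepted)
    return [sum(ys[i] for ys in accepted) / k for i in range(len(xs))]

# NB 3 example: no interval is truncated, so by symmetry each conditional
# mean is roughly x_i - 1/3 (the 1% excess spread equally).
print(conditional_means([10, 80, 11]))
```

For entries with $x_i=0\%$ the interval $[0\%,0.5\%]$ is truncated, so those conditional means stay nonnegative automatically, avoiding the NB 4 problem.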
You can't compute the unrounded $y$s, because rounding has thrown that information away. What you can do is alter the $x$s so that they add up to $100\%$. You will then violate the fact that the $x$s are the rounded values closest to the $y$s. Since they currently add to $101\%$, you just need to decrease one of them by $1\%$. If you have the $y$s available, choose the one that is closest above its $zz.50\%$ rounding boundary, which seems the logical one to flip: it makes the least error. If you don't have the $y$s available, I would decrease the largest $x$, just because it introduces the least fractional error. But you might as well pick one at random.
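The two strategies above can be sketched in a few lines of Python; `fix_to_100` is a hypothetical helper name, and it assumes (as in the question) that the excess is exactly $1\%$:

```python
def fix_to_100(xs, ys=None):
    """Adjust whole percentages xs (summing to 101) to sum to 100.
    With ys known: decrement the entry whose y is closest above .50,
    since that entry was rounded up by the smallest margin.
    Without ys: decrement the largest x (least fractional error)."""
    xs = list(xs)
    assert sum(xs) == 101
    if ys is not None:
        # since sum(xs) exceeds sum(ys), at least one y was rounded up,
        # so at least one fractional part is >= 0.5
        candidates = [i for i, y in enumerate(ys) if (y % 1) >= 0.5]
        i = min(candidates, key=lambda i: ys[i] % 1)
    else:
        i = max(range(len(xs)), key=lambda i: xs[i])
    xs[i] -= 1
    return xs

print(fix_to_100([10, 80, 11]))                        # [10, 79, 11]
print(fix_to_100([10, 80, 11], [9.7, 79.75, 10.55]))   # [10, 80, 10]
```

In the second call the unrounded values are known: $10.55$ sits closest above its $.50$ boundary, so the $11\%$ entry is the one decremented.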