Compute finite series


The problem is to compute the sum of the finite series $$\sum_{k=0}^{k_0} \frac{a_k}{b_k}.$$ I need to compute this sum in binary with enough precision to output $n$ correct binary digits after the point. I can only use integer addition and integer division, and it is guaranteed that $\frac{a_k}{b_k} > 0$.

My first attempt was to compute, with integer division, $$\sum_{k=0}^{k_0} \frac{2^n a_k}{b_k}.$$ But I hadn't noticed that this situation may occur: $$\sum_{k=0}^{k_0} \left\{\frac{2^n a_k}{b_k}\right\} > 1,$$ where $\{\bullet\}$ denotes the fractional part. So I conjecture that $$\sum_{k=0}^{k_0} \frac{2^n 2^{k_0+1} a_k}{b_k}$$ computed with integer division will give $n$ correct binary digits after the point. The problem is that I need to prove this, and I don't really know how. Thank you.
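The scheme in the question can be sketched as follows. This is a minimal illustration, not a reference implementation: the function name `series_bits` and the `terms` list of $(a_k, b_k)$ pairs are hypothetical, and it uses $m = k_0 + 1$ guard bits as the question proposes.

```python
def series_bits(terms, n):
    """Sum a_k/b_k over the (a_k, b_k) pairs in `terms` to n binary
    digits after the point, using only integer addition and division.
    Uses m = k0 + 1 guard bits so the accumulated floor errors
    (less than one unit per term in the lowest place) cannot
    reach the n-th fractional bit."""
    k0 = len(terms) - 1
    m = k0 + 1                     # guard bits: 2^m > number of terms
    scale = 1 << (n + m)           # 2^(n+m)
    total = sum((scale * a) // b for a, b in terms)  # integer division only
    return total >> m              # low n bits are the fractional digits
```

For example, `series_bits([(1, 2), (1, 3)], 8)` returns $213 = \lfloor 2^8 \cdot \frac{5}{6} \rfloor$, i.e. the first 8 fractional bits of $\frac12 + \frac13$.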

Best answer:

If you take the coefficient of all terms to be $2^n 2^m$ for sufficiently large $m$, then you will be able to extract the first $n$ bits after the binary point. $m = k_0 + 1$ certainly works, but $m = O(\log k_0)$ already suffices. The point is that integer division loses less than one unit in the $(n+m)$th place after the point from each term, so even with this maximum loss in all $k_0 + 1$ terms, the total lost from the actual sum is less than $(k_0 + 1)\,2^{-n-m}$. So as long as $k_0 + 1 \le 2^m$, the lost contribution cannot change the first $n$ bits after the point by more than $1$ in the last place.

The only remaining question is whether you want to round to the nearest $n$ bits after the point or truncate. If you want to round, compute $n + 1$ bits instead of $n$, and then use the value of the $(n+1)$th bit to decide whether to round the first $n$ bits up to the next value.
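The error bound above can be checked numerically. The snippet below is a hedged worked example (the helper `scaled_sum` and the choice of nine copies of $1/3$ are mine, not from the answer): without guard bits the floor losses pile up across terms, while $m = O(\log k_0)$ guard bits keep the result within $1$ unit in the last place of the true value.

```python
from fractions import Fraction

def scaled_sum(terms, shift):
    """sum of floor(2^shift * a / b) over terms, integer ops only."""
    return sum(((1 << shift) * a) // b for a, b in terms)

# Worked example: nine copies of 1/3 (exact sum = 3), n = 4 bits wanted.
terms = [(1, 3)] * 9
n = 4
exact_scaled = int(sum(Fraction(a, b) for a, b in terms) * 2 ** n)  # 48

naive = scaled_sum(terms, n)             # floor losses pile up: 45, off by 3
m = len(terms).bit_length()              # smallest m with 2^m > number of terms
guarded = scaled_sum(terms, n + m) >> m  # 47, within 1 of the true value
```

Here $m = 4$ (since $2^4 = 16 > 9$ terms), so the guard-bit cost grows only logarithmically in $k_0$, as the answer claims.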