Number of significant figures when going from base 10 to binary.


What is the correspondence between the number of significant figures in base 10 and base 2?

Or in other words, if two decimal numbers are equal to $N_{10}$ significant figures, what is the number of significant figures, $N_{2}$, to which I need to compare them in binary, so that I get the same result?

Example: Let $N_{10} = 2$, i.e. we will be comparing to $2$ significant figures in base 10. In this case: $$ 100 = 102 \text{ (both round to 100)} $$ $$ 105 = 114 \text{ (both round to 110)} $$ In binary $100_{10} = 1100100_{2}, 102_{10} = 1100110_{2}, 105_{10} = 1101001_{2}, 114_{10} = 1110010_{2}$. From this particular example, it appears that $N_{2} = 3$.
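The example can be checked mechanically. Below is a small sketch (the helper `round_sig` is mine, not part of the question) that rounds a non-negative integer to $n$ significant figures in an arbitrary base, ties rounding up:

```python
def round_sig(x: int, n: int, base: int = 10) -> int:
    """Round non-negative integer x to n significant figures in `base`."""
    # Count the digits of x in the given base.
    digits, t = 0, x
    while t:
        t //= base
        digits += 1
    if digits <= n:
        return x  # already at most n significant figures
    p = base ** (digits - n)          # weight of the first dropped digit
    return ((x + p // 2) // p) * p    # round half up, then zero the tail

# The pairs from the question, at N10 = 2 decimal significant figures:
assert round_sig(100, 2) == round_sig(102, 2) == 100
assert round_sig(105, 2) == round_sig(114, 2) == 110
# The same pairs at N2 = 3 binary significant figures:
assert round_sig(100, 3, base=2) == round_sig(102, 3, base=2) == 96
assert round_sig(105, 3, base=2) == round_sig(114, 3, base=2) == 112
```

So for these particular pairs, $N_2 = 3$ does reproduce the base-10 verdict.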

Why I ask this: I am trying to implement an efficient equality comparison that respects the number of significant figures. Ultimately, I would like it to work entirely in binary using cheap shift operations.

There are 3 answers below.

Best answer:

For fixed $N_{10}$, no such $N_2$ exists.

Proof:
Given a candidate $N_2$, set $n = 2^{N_2} + 1$ (in binary, two $1$s separated by $N_2 - 1$ zeros). Then in base $2$, $n\cdot 2^k$ always rounds up to $(n+1)\cdot 2^k$ at $N_2$ significant figures, while $n\cdot 2^k - 1$ rounds down to $(n-1)\cdot 2^k$. But the consecutive integers $n\cdot 2^k$ and $n\cdot 2^k - 1$ round to the same $N_{10}$ significant decimal figures for at least one $k$ (consecutive integers can round differently only when the larger one lands exactly on a decimal rounding boundary), so this $N_2$ does not work.
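As a concrete instance of the construction (using a small rounding helper of my own, not from the question): for the candidate $N_2 = 3$ and $N_{10} = 2$, take $n = 2^3 + 1 = 9$ and $k = 4$, so the consecutive integers are $144 = 9 \cdot 16$ and $143$:

```python
def round_sig(x: int, n: int, base: int = 10) -> int:
    """Round non-negative integer x to n significant figures in `base`."""
    digits, t = 0, x
    while t:
        t //= base
        digits += 1
    if digits <= n:
        return x
    p = base ** (digits - n)
    return ((x + p // 2) // p) * p

# 144 and 143 agree to two decimal significant figures...
assert round_sig(144, 2) == round_sig(143, 2) == 140
# ...but disagree to three binary significant figures:
assert round_sig(144, 3, base=2) == 160  # rounds up to (n+1) * 2^k
assert round_sig(143, 3, base=2) == 128  # rounds down to (n-1) * 2^k
```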

Answer:

Significant figures are a rough way of assessing the relative accuracy of a number. $98$ and $11$ each have two significant figures, but if you assign a maximum absolute error of $\pm 0.5$ to each, the relative error is about $0.5\%$ in the first case and $4.5\%$ in the other. A factor of about $3$ between the two bases is in line with the roughness of the approximation. Even in a single base, carries cause problems: if the true value is $9.999994 \pm 0.000002$, it could still round to $10$, yet that small an error would usually entitle a number of order $10$ to six significant figures.

Answer:

There is a "cheap" programmer's trick that does the job, although I hope that someone comes up with a better mathematical answer.

We consider the basic task of rounding to the nearest multiple of $10^n$, with $n$ an integer (not necessarily positive).

The formula for rounding $x$ is then

$$\lfloor \frac{x+ 10^n/2}{10^n} \rfloor 10^n$$

These are all easy bit manipulations except for the division inside the floor function. ($10^n$ is a precomputed constant, division by $2$ is a right shift, and taking the floor simply truncates the trailing bits.)
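As a sketch, the formula translates directly into integer arithmetic (the function name is mine):

```python
def round_to_power(x: int, n: int) -> int:
    """Round x to the nearest multiple of 10**n, ties rounding up."""
    p = 10 ** n        # the constant 10^n
    half = p // 2      # 10^n / 2 (exact for n >= 1)
    return ((x + half) // p) * p

assert round_to_power(144, 1) == 140   # 144 -> 140
assert round_to_power(145, 1) == 150   # tie rounds up
assert round_to_power(1049, 2) == 1000
```

The only expensive step left is the integer division by $p$, which the trick below replaces with a multiply and a shift.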

The idea is that we can approximate division (to arbitrary precision) by combining a multiplication with a shift. For the rounding above, we wish to approximate division by $10^n$.

Say we wish to divide by $10$. We can instead multiply by $1048576/10 = 104857.6 \approx 104858$ and then divide by $2^{20} = 1048576$, which is just a shift. This gives approximately $5$ significant digits. In other words, we multiply by $104858$, shift right by $20$, and truncate.
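A minimal sketch of that step (constants as above; the error bound is my own back-of-the-envelope estimate, not from the answer):

```python
# Approximate division by 10 via multiply-and-shift.
M, SHIFT = 104858, 20   # M = round(2**20 / 10)

def div10_approx(x: int) -> int:
    return (x * M) >> SHIFT

# The multiplier overestimates 2**20/10 by 0.4, so the result is exact
# until x * 0.4 / 2**20 reaches a unit boundary, i.e. for x up to
# roughly a few hundred thousand.
assert div10_approx(144) == 14
assert div10_approx(99999) == 9999
```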

In essence, if we know the amount of significant digits ahead of time, we can pre-process a simple division/multiplication algorithm which will round for us.
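Putting the pieces together, a shift-based round-to-nearest-ten might look like this (a sketch under the same constants; validity range as estimated above):

```python
# Round to the nearest multiple of 10 (ties up) using only
# addition, multiplication, and a right shift.
M, SHIFT = 104858, 20   # M = round(2**20 / 10)

def round_to_10(x: int) -> int:
    q = ((x + 5) * M) >> SHIFT   # approximates (x + 5) // 10
    return q * 10

assert round_to_10(144) == 140
assert round_to_10(145) == 150   # tie rounds up
```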

The number of correct significant digits (for the division itself, not for the rounding) is roughly $\log_{10}(2^m/10^n) \approx 0.301\,m - n$, where $2^m$ is the power of two and $10^n$ the power of ten used in the division.

This isn't much worse than some of the best division algorithms out there. Plus, it is extremely simple.