How do 24 significant bits give from 6 to 9 significant decimal digits?


I was reading about the IEEE 754 single-precision binary floating-point format (binary32) when I ran into:

The IEEE 754 standard specifies a binary32 as having:

  • Sign bit: 1 bit
  • Exponent width: 8 bits
  • Significand precision: 24 bits (23 explicitly stored)

This gives from 6 to 9 significant decimal digits precision

I'm not really sure how this was calculated. Could you please explain?

Firstly, $\log_2(10) \approx 3.32$, so you need about that many bits per decimal digit. You'd therefore expect about $24/\log_2(10) \approx 7.2$ digits of precision, but that misses the trickiness here. For instance, consider the IEEE number $2^0 \times 1.000\,000\,000\,000\,000\,000\,000\,00_2$ (the significand written in binary). We would typically render this as $1.0$, but how many $0$s can we actually guarantee?
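If you want to check that back-of-the-envelope arithmetic, a short Python sketch (just the formula above, nothing IEEE-specific) gives the same figure:

```python
import math

# Each decimal digit carries log2(10) ~= 3.32 bits of information,
# so a 24-bit significand is worth roughly 24 / log2(10) decimal digits.
print(math.log2(10))        # ~3.3219
print(24 / math.log2(10))   # ~7.2247
```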

Well, that number could represent anything in the interval $[1 - 2^{-24},\, 1 + 2^{-24})$, and $1 + 2^{-24} \approx 1.0000000596$, so we're okay to $7$ significant figures here.
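To see that concretely, here's a small sketch using NumPy (an assumption on my part; any way of inspecting float32 neighbours would do) that prints the closest representable binary32 values on either side of $1.0$:

```python
import numpy as np

one = np.float32(1.0)
below = np.nextafter(one, np.float32(0.0))  # largest float32 strictly below 1.0
above = np.nextafter(one, np.float32(2.0))  # smallest float32 strictly above 1.0

# Roughly 0.9999999404 < 1.0 < 1.0000001192: anything that ended up stored
# as 1.0 is only pinned down to about 7 significant decimal digits.
print(f"{below:.10f} < 1.0 < {above:.10f}")
```

(The gap above $1.0$ is $2^{-23}$ while the gap below is $2^{-24}$, since the exponent changes at $1.0$.)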

However, the precision isn't going to be the same everywhere. There are places where the binary and decimal representations mesh well, and you get some extra digits, but there are places where they mesh poorly, and you need more bits than usual per digit. Working out where these are is a good job for the machine itself: $2^{32}$ is only about 4 billion possibilities, and only 2 billion if you don't care about sign.
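A rough version of that brute-force idea, as a Python sketch (the function names and the sampling stride are my own choices, and it only samples the bit patterns rather than sweeping all $2^{32}$, which would be slow in pure Python; `struct` is used to reinterpret bit patterns as binary32):

```python
import struct

def bits_to_f32(u):
    # Reinterpret a 32-bit pattern as an IEEE binary32 value (as a Python float).
    return struct.unpack("<f", struct.pack("<I", u))[0]

def to_f32(x):
    # Round a Python float (binary64) to the nearest binary32 value.
    return struct.unpack("<f", struct.pack("<f", x))[0]

def digits_needed(x):
    # Fewest significant decimal digits whose rendering of x reads back
    # as exactly the same binary32 value.
    for d in range(1, 12):
        if to_f32(float(f"{x:.{d - 1}e}")) == x:
            return d
    return None

worst = 0
# Coarse sample of the finite non-negative bit patterns (below the inf/NaN range).
for u in range(0, 0x7F800000, 9973):
    d = digits_needed(bits_to_f32(u))
    if d is not None and d > worst:
        worst = d
        print(f"0x{u:08X} -> {bits_to_f32(u)!r} needs {d} digits")
print("worst seen in this sample:", worst)
```

With a dense enough sweep this turns up values that need all $9$ digits to pin down, which is one end of the "6 to 9" range in the quoted sentence; the $6$ comes from the reverse direction, since any decimal with 6 significant digits survives a round trip through binary32 unchanged.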