Proof of accuracy of converting decimal to binary floating-point number to decimal


As mentioned in the book, "we may think of its value not as exact but as exact within a factor of $1+\epsilon$. Thus for example, IEEE single format numbers are accurate to within a factor of about $1+10^{-7}$, which means that they have approximately seven significant decimal digits."
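For context, the factor $1+10^{-7}$ presumably comes from the unit roundoff of the single format, $u = 2^{-24}$ (24 significand bits, round to nearest):

$$u = 2^{-24} \approx 5.96\times 10^{-8}, \qquad \log_{10} 2^{24} \approx 7.22,$$

so rounding a normalized value to single precision perturbs it by a factor of at most $1+u \approx 1 + 6\times 10^{-8} < 1+10^{-7}$, which corresponds to roughly seven significant decimal digits.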

I understand the formulas and Theorem 5.1.

From there, how can one deduce that converting a decimal number (in the normalized range) to a single-precision binary floating-point number and then back to decimal preserves at least the first six significant digits?

A proof would be much appreciated.
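For what it's worth, here is a quick empirical check of the claim (a Python sketch, not a proof; it uses the standard `struct` module to emulate the float32 round trip, and the sample inputs are my own):

```python
import struct

def to_single(x: float) -> float:
    """Round a Python float (binary64) to IEEE-754 single precision
    (binary32) and widen it back, emulating the decimal -> float32
    -> decimal round trip."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Unit roundoff of the single format: u = 2**-24.
u = 2.0 ** -24

for x in [0.1, 3.141592653589793, 123456.789, 6.02214076e23]:
    y = to_single(x)
    rel = abs(y - x) / abs(x)
    # Round to nearest guarantees fl(x) = x * (1 + delta) with |delta| <= u.
    assert rel <= u
    # Printing both values to 6 significant digits shows they agree
    # for these inputs.
    print(f"{x:.6g} -> {y:.6g}  (relative error {rel:.2e})")
```

This only illustrates the bound on a few values; the question is about the general argument that a relative error of at most $u$ can never disturb more than the seventh significant decimal digit.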
