I am reading a paper which states that four-byte arithmetic has accuracy $\delta \sim 10^{-7}$. As I understand it, there are 8 bits in a byte, so that makes 32 bits: one bit is used for the sign and 8 bits for the exponent, leaving 23 for the mantissa. I have found somewhere that the relevant calculation is $\log_{10}2^{23}=6.92$, but I don't know why this is the calculation, or how it relates to the $\delta$ given, since that exponent is negative. I tried to ask this on Stack Overflow but was told it wasn't the right place!
Arithmetic Precision
Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail) on 2026-03-26
Single-precision floating point numbers are encoded as $\pm 2^{e}\cdot(1.m_1m_2\ldots m_{23})_2$.
Representable numbers between $1$ and $2$ are spaced a distance $2^{-23}=(8\cdot 1024^2)^{-1}$ apart. The distance from any number in $[1,2]$ to the nearest representable number is therefore at most half that, $2^{-24}=(16\cdot 1024^2)^{-1}$. Since $2^{-23}=10^{-\log_{10}2^{23}}\approx 10^{-6.92}\approx 1.2\cdot 10^{-7}$, this justifies saying that the accuracy is about $7$ correct decimal digits; that is also where the negative exponent in $\delta\sim 10^{-7}$ comes from.
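A quick sanity check of these numbers in Python (a sketch; the variable names are my own):

```python
import math

# Spacing of single-precision numbers in [1, 2): one unit in the last
# of the 23 mantissa bits.
ulp = 2.0 ** -23
print(ulp)  # about 1.19e-07

# Expressed as a count of decimal digits:
digits = math.log10(2 ** 23)
print(digits)  # about 6.92, i.e. roughly 7 decimal digits
```

So $\delta\sim 10^{-7}$ is just $2^{-23}$ rewritten in base $10$: the exponent $6.92$ appears with a negative sign because the spacing is $10^{-6.92}$, not $10^{+6.92}$.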
This absolute accuracy on $[1,2]$ transfers as relative accuracy to all other dyadic intervals $[2^e,2^{e+1}]$, since the spacing there is simply scaled by $2^e$.
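One can observe this scaling directly by stepping to the next representable single-precision number via its bit pattern (a sketch using only the standard library; `next_float32` is a helper name of my own):

```python
import struct

def next_float32(x):
    # Reinterpret the IEEE binary32 bits as an unsigned integer, add 1,
    # and reinterpret back: for positive finite x this gives the next
    # representable single-precision number above x.
    (bits,) = struct.unpack('<I', struct.pack('<f', x))
    (nxt,) = struct.unpack('<f', struct.pack('<I', bits + 1))
    return nxt

for x in [1.0, 2.0, 256.0]:
    gap = next_float32(x) - x
    # The absolute gap grows with the exponent, but the relative gap
    # at each power of two is always 2**-23.
    print(x, gap, gap / x)
```

The absolute gap is $2^{e-23}$ on $[2^e,2^{e+1}]$, so the relative gap stays near $2^{-23}$ throughout.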
One could carry this further by exploring the unique representability of numbers with up to $7$ significant decimal digits, that is, whether a number like $7.654321$ has a unique single-precision floating point representation that rounds back to the same $7$ digits.
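One instance of that question can be tested by rounding a decimal through single precision and formatting it back to $7$ significant digits (a sketch, not a proof of the general claim; `to_float32` is a helper name of my own):

```python
import struct

def to_float32(x):
    # Round a Python float (binary64) to the nearest IEEE binary32 value.
    return struct.unpack('<f', struct.pack('<f', x))[0]

x = 7.654321
y = to_float32(x)
# Near 7.65 the float32 spacing is 2**-21, so the rounding error is at
# most about 2.4e-7, small enough that the 7th significant digit survives.
print(f"{y:.7g}")  # prints 7.654321 in this case
```

This shows the round trip succeeds for this particular value; whether it succeeds for *every* $7$-digit decimal is exactly the question above (in C terms, `FLT_DIG` is $6$, not $7$).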