Say I have a low-precision calculator that displays numbers with only an 8-digit mantissa.
For example, 0.00432137378 would read 4.3213737e-3. Now I have a log10 button that computes the logarithm (base 10) of numbers.
I read from a book a cool trick to use this button in a more "accurate" manner:
Say you want to compute log10(101). Doing it naively would yield 2.0043213. Instead, write 101 in scientific notation: 101 = 1.01e2. Then log10(1.01) reads 4.3213737e-3, and since log10(101) = 2 + log10(1.01), you just gained 3 digits of accuracy: log10(101) = 2.0043213737.
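To see the trick numerically, here is a small Python sketch that models the calculator by truncating every displayed result to 8 significant digits (the helper `trunc_sig` and the 8-digit truncation model are my own assumptions, chosen to match the displays in the example above):

```python
import math

def trunc_sig(x, digits=8):
    """Truncate x to `digits` significant figures, mimicking a
    calculator display with an 8-digit truncated mantissa."""
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x)))      # decimal exponent of x
    factor = 10 ** (digits - 1 - e)
    return math.trunc(x * factor) / factor

x = 101.0
naive = trunc_sig(math.log10(x))            # what the log10 button shows
# -> 2.0043213

n = math.floor(math.log10(x))               # exponent in scientific notation: 2
m = x / 10**n                               # mantissa: 1.01
trick = n + trunc_sig(math.log10(m))        # 2 + log10(1.01), all 8 digits kept
# -> 2.0043213737

print(naive, trick)
```

The gain comes from the integer part 2 costing no mantissa digits in the second computation, so all 8 displayed digits go to the fractional part.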
I want to "prove" this trick more formally, but I am not sure how to go about it: when is the trick valid, and when does it start to be useless?
Basically, for every positive number $x$, we have $1 \leq 10^{-n} x < 10$ for some $n \in \mathbb{Z}$. Now, with my 8-digit-precision calculator, let $\alpha$ be the value of $10^{-n}x$ truncated to the 7th decimal, and let $\beta = \alpha + 10^{-7}$. I have $10^n \alpha \leq x < 10^n \beta$, so I can write: $n + \log(\alpha) \leq \log(x) < n + \log(\beta)$. Since $\alpha \in [1, 10)$, $\log(\alpha) \in [0, 1)$, and we can then get the extra precision. But shouldn't this work only when $x = 10^n \alpha$ exactly? What happens when I'm already off by up to $10^{-7}$ after truncating $10^{-n}x$ to the 7th decimal? How does that error propagate through the log afterwards?
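The bracketing above can be checked numerically. The sketch below (my own illustration, with hypothetical helper names) computes $\alpha$, $\beta$, and the resulting interval for $\log_{10} x$, and checks that its width is $\log_{10}(\beta/\alpha) \leq 10^{-7}/(\alpha \ln 10) \leq 10^{-7}/\ln 10 \approx 4.35 \times 10^{-8}$, i.e. the truncation error of $10^{-7}$ is shrunk by the derivative of the log:

```python
import math

def log10_interval(x):
    """Bracket log10(x) via the truncation argument:
    n + log10(alpha) <= log10(x) < n + log10(beta)."""
    n = math.floor(math.log10(x))            # n such that 1 <= 10^-n * x < 10
    m = x / 10**n                            # mantissa in [1, 10)
    alpha = math.floor(m * 10**7) / 10**7    # truncated to the 7th decimal
    beta = alpha + 1e-7
    return n + math.log10(alpha), n + math.log10(beta)

for x in [101.0, 314.159265, 0.00432137378]:
    lo, hi = log10_interval(x)
    width = hi - lo                          # = log10(beta/alpha)
    print(x, lo, hi, width)                  # width stays below ~4.35e-8
```

Note that the interval width shrinks further as $\alpha$ grows toward 10, so the trick is at its worst (but still good) when the mantissa is close to 1.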
Could somebody help me prove more formal bounds on the approximation error I make in this simplified setting?