Is adding and subtracting 1 some generic method of removing numerical instability?
Like in:
Unstable: $$\frac{\log(1+3 \cdot 10^{-16})}{3 \cdot 10^{-16}}$$
Stable:
$$\frac{\log(1+3 \cdot 10^{-16})}{1+3 \cdot 10^{-16}-1}$$
The correct answer (of the first expression) is $\approx 1$.
Computing these in Python 3:

```python
import math

math.log(1 + 3*pow(10, -16)) / (3*pow(10, -16))
# > 0.7401486830834376

math.log(1 + 3*pow(10, -16)) / (1 + 3*pow(10, -16) - 1)
# > 0.9999999999999999
```
For a definition of numerical stability, I refer to:
What you are experiencing is pure coincidence, and a nice example of how a single example cannot be used to draw general conclusions.
When you write `math.log(1+3*pow(10,-16))/(1+3*pow(10,-16)-1)`, the program first calculates the denominator and gets `(1+3*pow(10,-16)-1) = 2.220446049250313e-16`, which is quite a ways off from the actual value of `3e-16`. Then it calculates the numerator and gets `math.log(1+3*pow(10,-16)) = 2.2204460492503128e-16`, which again is a long way off from the actual value, which should be close to `3e-16`. The reason the two errors are the same is that the logarithm is probably calculated using its power series, i.e.
$$\ln(1+x)=x-\frac12 x^2+\cdots$$
which is reworked into
$$\ln(x) = \ln(1+(x-1)) = (x-1) - \frac12 (x-1)^2 + \cdots$$
so when calculating `log(1+3e-16)` you are forcing the software to compute `(1+3e-16) - 1`, the exact same computation (resulting in the exact same error) it performed when calculating the denominator. It is pure coincidence that you discovered a function (namely `log`) that makes the same numerical error as the addition, so the two errors cancel out.

In general, you should expect a *bigger* error when calculating $$\frac{f(x)}{1+x-1}$$ compared to $\frac{f(x)}{x}$ for small values of $x$, because calculating $1+x-1$ already produces large relative errors.
For example, taking $f(x) = \sin x$ shows that the modified calculation is not, in general, more stable.
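A quick check of that claim (a sketch; for tiny $x$, $\sin x$ rounds to $x$ itself in double precision, so the straightforward quotient is accurate while the rewritten denominator is still `eps` instead of `3e-16`):

```python
import math

x = 3e-16

# sin(x) rounds to x for such small x, so sin(x)/x is accurate:
print(math.sin(x) / x)            # approximately 1

# (1 + x) - 1 evaluates to eps, about 2.22e-16, not 3e-16,
# so the "stabilized" quotient is off by roughly 35%:
print(math.sin(x) / (1 + x - 1))  # approximately 1.35
```

Here the two rounding errors do not cancel, so the `1+x-1` version is clearly worse.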