Simple loss of significance question.


I know that (because of the finiteness of computer arithmetic) for large values of $x$, the function $$f(x)=\log(x+1)-\log(x)$$ will be subtracting two very close values. Can I get around this by rewriting the function as $\log(1+\frac{1}{x})$? If not, what would be a cleverer way of rewriting it? Thank you in advance.
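For illustration, here is a minimal sketch of the cancellation in Python (assuming IEEE double precision; the value `1e17` is just a hypothetical large input):

```python
import math

# Hypothetical large input: the true value of log(x+1) - log(x)
# here is about 1e-17.
x = 1e17

# In double precision, x + 1 rounds back to x (the spacing between
# adjacent doubles near 1e17 is 16), so the two logarithms are
# identical and the subtraction loses all significance.
naive = math.log(x + 1) - math.log(x)
print(naive)  # 0.0
```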


It might help a little, but it doesn't really resolve the issue: $1+1/x$ is eventually rounded to $1$ in floating point arithmetic, so for large enough $x$ that implementation will still return zero (committing infinite relative error). This occurs for significantly smaller $x$ than the $x$ for which $1/x$ itself rounds to zero, so this implementation is suboptimal. (Once $1/x$ rounds to zero, there's not really anything you can do without increasing the working precision, since the quantity of interest is zero to within the working precision anyway.)
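A quick sketch of this in Python (assuming IEEE doubles; the threshold $2^{53}\approx 9\times 10^{15}$ comes from the 53-bit significand):

```python
# Once 1/x drops below half an ulp of 1 (i.e. once x exceeds about 2^53),
# the sum 1 + 1/x rounds to exactly 1.0, even though 1/x itself is
# still far from underflowing to zero.
x = 1e16
print(1/x > 0)          # True: 1/x is a perfectly normal small double
print(1 + 1/x == 1.0)   # True: but it no longer perturbs 1
```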

However, that rewriting is a good first step. Having done it, you can fix the new problem by using a specialized function that evaluates $\log(1+x)$ accurately for small $x$ without actually computing $1+x$ as an intermediate. The usual name for this function in programming languages is `log1p`.
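In Python, `log1p` is available in the standard `math` module. A short comparison (the value `1e17` is again a hypothetical large input):

```python
import math

x = 1e17  # true value of log(x+1) - log(x) is about 1e-17

print(math.log(1 + 1/x))  # 0.0, since 1 + 1/x rounds to exactly 1.0
print(math.log1p(1/x))    # ~1e-17: log1p never forms 1 + 1/x explicitly
```

The same function exists under the same name in C (`log1p` in `<math.h>`), NumPy (`np.log1p`), and most other numerical environments.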