Avoiding loss of numerical accuracy


I need to evaluate the function

$f(x) = {1 - (1-A)^x \over A}$,

where $0 < A \leq 1$ and $0 \leq x \leq 1$.

A straightforward C implementation of $f(x)$ with floating-point arithmetic works fine as long as A is reasonably large, but it breaks down when A is tiny. $(1-A)^x$ is very close to 1, and subtracting it from 1 yields a small number with hardly any significant digits, which is then divided by the tiny A.

As $A$ approaches zero, $f(x)$ appears to approach the identity function $f(x) = x$, but I cannot prove that.

Is there some way to rearrange the definition of $f$ that avoids the loss of precision when A is tiny? Even better, is there a way to make it so that $f(x) = x$ for $A = 0$, without introducing a special case?


Answer 1 (by Ross Millikan):

The simple thing to do is to use $(1-A)^x \approx 1-xA$ for $A \ll 1$, which gives $f(x)=x$, as you say. You can test $A$ and switch to $f(x)=x$ when it is small. The problem is that this introduces a jump in $f(x)$ as you cross the transition; maybe that matters, maybe not. You can keep more terms by writing $(1-A)^x = e^{x\log(1-A)}$ and using the Taylor series $(1-A)^x \approx 1+x\log(1-A)+\frac 12 (x \log (1-A))^2 + \frac 16 (x \log (1-A))^3 +\ldots$, together with $\log (1-A)=-A-\frac 12A^2-\frac 13A^3-\ldots$ Keeping more terms reduces the jump. If you need continuity, keep a good number of terms and then interpolate near the transition.

Answer 2:

Construct a polynomial (or rational) approximation of your function that gives good accuracy over the entire interval $[0,1]$, and evaluate this approximation instead of evaluating your original function.

Constructing a good approximation is a fair bit of work, so it may not be worthwhile for you. The best approach is Chebyshev approximation. You should be able to construct a polynomial of reasonably small degree that approximates your function adequately throughout $[0,1]$. With a bit more work, you can arrange for the approximation to be exactly correct at $x=0$ and $x=1$. No special-case branching is needed in your code, so this avoids the discontinuity problem mentioned in Ross Millikan's answer. In general, Chebyshev approximations are nice because they are accurate throughout an entire interval, whereas Taylor series approximations are very good at a single point (and nearby) but poor elsewhere.

Constructing the approximation is easy if you have access to Matlab -- you can use the Chebfun package.

There is the added benefit that evaluating a polynomial will probably be faster than evaluating your original function.