I need to evaluate the function
$f(x) = {1 - (1-A)^x \over A}$,
where $0 < A \leq 1$ and $0 \leq x \leq 1$.
A straightforward C implementation of $f(x)$ in floating-point arithmetic works fine as long as $A$ is reasonably large, but it breaks down when $A$ is tiny: $(1-A)^x$ is then very close to $1$, so subtracting it from $1$ yields a small number with hardly any significant digits, which is then divided by the tiny $A$.
$f(x)$ appears to approach the identity function as $A$ approaches zero, but I cannot prove it.
Is there some way to rearrange the definition of $f$ that avoids the loss of precision when A is tiny? Even better, is there a way to make it so that $f(x) = x$ for $A = 0$, without introducing a special case?
The simple thing to do is to use $(1-A)^x \approx 1-xA$ for $A \ll 1$, which gives $f(x)=x$, as you say. You can test $A$ and switch to this approximation when $A$ is small. The drawback is that $f(x)$ jumps as you cross the transition. Maybe this is a problem, maybe not.

You can reduce the jump by keeping more terms of the Taylor series
$$(1-A)^x = e^{x\log(1-A)} \approx 1+x\log(1-A)+\frac 12 (x \log (1-A))^2 + \frac 16 (x \log (1-A))^3 +\ldots,$$
where $\log (1-A)=-A-\frac 12A^2-\frac 13A^3-\ldots$ Keeping more terms makes the jump smaller. If you need continuity, you can keep a good number of terms and then do an interpolation near the transition.