I have this function
$$f(x, t)=\frac{\left(1+x\right)^{1-t}-1}{1-t}$$
where $x \ge 0$ and $t \ge 0$.
I want to use it in a neural network, so I need it to be differentiable. While it has a removable discontinuity at $t = 1$, the limit $\lim_{t \rightarrow 1}{\frac{\left(1+x\right)^{1-t}-1}{1-t}}=\log(1+x)$ is well defined, so I patch up $f$ like this:
$$ f(x, t) = \left\{ \begin{aligned} &\frac{\left(1+x\right)^{1-t}-1}{1-t}&, t \ne 1 \\ &\log(1+x)&, t = 1 \end{aligned} \right. $$
In infinite-precision arithmetic we are done, but in floating point this is still numerically unstable around $t = 1$.
What can be done about that? Can the expression be transformed somehow to avoid it?
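To make the instability concrete, here is a small Python sketch (the function name `f_naive` is my own) that evaluates the patched formula directly and compares it against the limit value $\log(1+x)$ as $t \to 1$; the accuracy visibly degrades:

```python
import math

def f_naive(x, t):
    """Direct evaluation of ((1+x)**(1-t) - 1) / (1-t); cancellation-prone near t = 1."""
    if t == 1.0:
        return math.log1p(x)
    return ((1.0 + x) ** (1.0 - t) - 1.0) / (1.0 - t)

# The limit value for x = 2 is log(3) ≈ 1.0986.
# As t approaches 1, the numerator (1+x)**(1-t) - 1 loses more and
# more significant digits to cancellation before the division.
for t in (1.5, 1.0 + 1e-8, 1.0 + 1e-12, 1.0 + 1e-15):
    print(t, f_naive(2.0, t))
```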
When computing $1-t$ for $t$ close to $1$ in finite-precision floating-point arithmetic, the subtraction itself is exact by the Sterbenz lemma (which applies whenever $\tfrac{1}{2} \le t \le 2$).
This leaves us with problems of subtractive cancellation. These can be addressed by making use of $\mathrm{expm1}(x) := \exp(x)-1$ and $\mathrm{log1p}(x) := \log (1+x)$. Most programming environments offer standard math functions with exactly those names, that is,
`expm1()` and `log1p()`. Based on these, one computes:
$$ f(x, t) = \left\{ \begin{aligned} &\frac{\mathrm{expm1}((1-t)\,\mathrm{log1p}(x))}{1-t}&, t \ne 1 \\ &\mathrm{log1p}(x)&, t = 1 \end{aligned} \right. $$
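A minimal Python sketch of this transformation, using the standard-library `math.expm1` and `math.log1p` (the function name `f_stable` is my own):

```python
import math

def f_stable(x, t):
    """Numerically stable f(x, t) = ((1+x)**(1-t) - 1) / (1-t).

    Rewrites the numerator as expm1((1-t) * log1p(x)), which avoids the
    subtractive cancellation of computing (1+x)**(1-t) - 1 directly.
    """
    s = 1.0 - t  # exact for 1/2 <= t <= 2 by the Sterbenz lemma
    if s == 0.0:
        return math.log1p(x)  # the limit value at t = 1
    return math.expm1(s * math.log1p(x)) / s
```

Since `expm1(u)/u` tends smoothly to 1 as `u -> 0`, the expression stays close to `log1p(x)` for `t` near 1 instead of blowing up; for example `f_stable(2.0, 1.0 + 1e-12)` agrees with `log(3)` to near machine precision.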