Approximating a function's derivative to machine precision


Let's say I have a very complicated analytical function $f$ for which computing the derivative at a point $Q$, i.e. $f'(Q)$, analytically is not practical. So we resort to finding the derivative numerically, without differentiating the function analytically at $Q$ (note that $Q$ is a single point, not an array).

Method 1: I have tried finite differences of order $d$. This does not give a machine-precision result: accuracy degrades when the order is too high, and round-off error blows up when the step is too small.
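To illustrate the step-size dilemma, here is a small Python sketch (the test function $\sin$ and the point are arbitrary choices, not from my actual problem), using a central difference: shrinking $h$ past the sweet spot makes the result worse, not better.

```python
import math

# Central difference of sin at x = 0.53 (arbitrary demo choices):
# truncation error shrinks with h, but round-off error grows like eps/h.
f = math.sin
x = 0.53
exact = math.cos(x)

def central_diff(f, x, h):
    # O(h^2) central difference
    return (f(x + h) - f(x - h)) / (2 * h)

err_near_optimal = abs(central_diff(f, x, 1e-5) - exact)   # near the sweet spot
err_too_small    = abs(central_diff(f, x, 1e-13) - exact)  # dominated by round-off

print(err_near_optimal, err_too_small)
```

The first error sits around $10^{-11}$, while the second is many orders of magnitude larger, despite the much smaller step.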

Method 2: Interpolation. This does not reach machine precision either when differentiating the interpolant at the point $Q$, i.e. $f'(Q) \approx \sum_i L_i'(Q)f(x_i)$, where $L_i$ is the interpolation basis function associated with the nodal point $x_i$.
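A Python sketch of what I mean (the function, the point, and the degree are illustrative choices): interpolate $f$ at Chebyshev nodes, differentiate the interpolant, and evaluate at $Q$. The error is small but typically a few digits short of full machine precision.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

# Differentiate the polynomial interpolant of f through Chebyshev nodes,
# then evaluate the derivative at Q. f = sin and Q = 0.53 are demo choices.
f = np.sin
Q = 0.53
deg = 16

nodes = cheb.chebpts1(deg + 1)               # Chebyshev points of the first kind on [-1, 1]
coeffs = cheb.chebfit(nodes, f(nodes), deg)  # interpolant in the Chebyshev basis
dcoeffs = cheb.chebder(coeffs)               # derivative of the interpolant
approx = cheb.chebval(Q, dcoeffs)

error = abs(approx - np.cos(Q))
print(error)
```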

Method 3: Spectral methods such as Legendre expansions and the like. No machine precision.

Method 4: Least-squares polynomial fit. Just no.

Method 5: Minimax polynomial approximation. This seems alright, giving around 13 decimal places of accuracy in Matlab.

Any better method? Suggestions? Alternatives?

Any help will be much appreciated. Thank you all!


There are 3 best solutions below

On BEST ANSWER

You may want to look into complex-step finite differences. Specifically, one uses the approximation $$f'(x) \approx \frac{\operatorname{imag}(f(x+ih))}{h},$$ where $\operatorname{imag}(z)$ is the imaginary part of $z$. You can get very close to machine precision; when I've done this I've had no trouble getting to accuracies of $10^{-15}$ or so. The main drawback is that you have to rework your code to work with complex numbers.

From a heuristic point of view, the method somehow takes advantage of the "extra" bits of precision associated with the imaginary component of a complex number. For a more rigorous view, there are advantages both in terms of reducing intermediate roundoff error due to cancellation (allowing one to take smaller step sizes), and in terms of a smaller error term in the series approximation that the approximation comes from. More details can be found here:

http://www.math.u-psud.fr/~maury/paps/NUM_CompDiff.pdf

Here is an example of using complex step differentiation to compute the derivative of $\sin(x)$ at $x=0.53$, written in Matlab/Octave:

f = @sin;                              % test function
x = 0.53;                              % evaluation point
h = 1e-8;                              % step size
df_diff = (f(x+h) - f(x))/h;           % forward finite difference
df_diff_complex = imag(f(x+1i*h))/h;   % complex-step approximation
df = cos(x);                           % exact derivative, for reference
finite_difference_error = abs(df - df_diff)/abs(df)
complex_step_error = abs(df - df_diff_complex)/abs(df)

which, when run on my computer, yields the following results:

finite_difference_error =    1.2543e-08
complex_step_error =    1.2868e-16
On

Obtaining a numerical derivative up to machine precision using real arithmetic only is hard due to round-off errors.

If the derivative is estimated with a finite-difference method of order $d$, then the expected error at the optimal step length is of order $\epsilon^{\frac{d}{d+1}}$ (where $\epsilon$ is the machine epsilon), times a term depending on the derivative of order $d+1$.
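This scaling is easy to observe in a quick Python check (the test function $\sin$ and the point are arbitrary choices), comparing the order-1 forward difference with the order-2 central difference at their respective near-optimal steps:

```python
import math

# Observing the eps^(d/(d+1)) rule with f = sin at x = 0.53 (demo choices)
eps = 2.22e-16          # double-precision machine epsilon
x = 0.53
exact = math.cos(x)

# d = 1 (forward difference): optimal step ~ eps^(1/2), error ~ eps^(1/2) ~ 1e-8
h1 = eps ** (1 / 2)
err_forward = abs((math.sin(x + h1) - math.sin(x)) / h1 - exact)

# d = 2 (central difference): optimal step ~ eps^(1/3), error ~ eps^(2/3) ~ 4e-11
h2 = eps ** (1 / 3)
err_central = abs((math.sin(x + h2) - math.sin(x - h2)) / (2 * h2) - exact)

print(err_forward, err_central)
```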

On

If precision is the priority (and speed is not), then consider the following.

To increase the precision of numerical differentiation do the following:

1) Choose your favorite high-precision "standard" method, based on some step size H.

2) Compute the value of the derivative with the method chosen in 1) many times, with different step sizes h. Each time, you may pick h as a random number from the interval (0.5*H/10, 1.5*H/10), where H is an appropriate step size for the method you use (yes, you should use smaller step sizes than the "optimal" one).

3) Average the results.

Your result may gain 2-3 orders of magnitude in the absolute error relative to the non-averaged result.

https://arxiv.org/abs/1706.10219
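The recipe above can be sketched in a few lines of Python; the central difference as the "standard" method, the test function $\sin$, and H = 1e-5 are all illustrative assumptions, not prescriptions:

```python
import math
import random

# Steps 1-3 above: pick a standard method (central difference), evaluate it
# with many random step sizes h drawn from (0.5*H/10, 1.5*H/10), then average.
f = math.sin
x = 0.53
H = 1e-5                # a reasonable "standard" step for the central difference
random.seed(0)          # reproducibility of the demo

def central_diff(h):
    return (f(x + h) - f(x - h)) / (2 * h)

samples = [central_diff(random.uniform(0.5 * H / 10, 1.5 * H / 10))
           for _ in range(1000)]
averaged = sum(samples) / len(samples)

err_single = abs(central_diff(H) - math.cos(x))   # one evaluation at step H
err_averaged = abs(averaged - math.cos(x))        # averaged estimate
print(err_single, err_averaged)
```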