Why do we need numerical methods of differentiation?


This link shows that we can calculate the derivative of a function using the following formula:

double derive(double (*func)(double), double x0)
{
    const double delta = 1.0e-6; // or similar
    double x1 = x0 - delta;
    double x2 = x0 + delta;
    double y1 = func(x1);
    double y2 = func(x2);
    return (y2 - y1) / (x2 - x1);
}

So, I have two questions in this regard:

  1. What is the origin of this formula?
  2. If we can use this formula to calculate the derivative, why do we need numerical methods?
There is 1 answer below.

Best answer:
  1. This is the central divided difference approximation of the first derivative, $$\frac{f(x+h)-f(x-h)}{2h}=f'(x)+O(h^2).$$

  2. This already is a numerical method. The way this computation is organized takes into account that the computed function arguments $x_k=x\pm h+\delta_k$ carry floating-point rounding errors $\delta_k$. Using the formula $$\frac{f(x_1)-f(x_2)}{x_1-x_2}=\frac{f(x+h+\delta_1)-f(x-h+\delta_2)}{(x+h+\delta_1)-(x-h+\delta_2)}=f'(x)+O(h^2+|\delta_1|+|\delta_2|)$$ ensures that the errors in the numerator and denominator go in about the same direction, so their effects largely cancel. With $|\delta_k|\le\mu(|x|+|h|)$, where $\mu$ is the machine epsilon, this error is $O(h^2+\mu|x|)$. The floating-point evaluation of $f$ adds another error term of size $\frac{\mu}{h}$ to the difference quotient, so in the end the error contains terms $$h^2,~~\mu|x|,~~\frac{\mu}{h}$$ that have to be balanced for an optimal step size. Balancing $h^2$ against $\frac{\mu}{h}$ gives $h\sim\mu^{1/3}\approx 6\cdot 10^{-6}$ in double precision, which is why a value like `delta = 1.0e-6` in the code is a sensible choice.
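
For question 1, the origin of the formula is the Taylor expansion of $f$ around $x$ (a standard derivation, consistent with the error order stated in point 1): $$f(x\pm h)=f(x)\pm h f'(x)+\frac{h^2}{2}f''(x)\pm\frac{h^3}{6}f'''(x)+O(h^4).$$ Subtracting the two expansions cancels the even-order terms, leaving $$\frac{f(x+h)-f(x-h)}{2h}=f'(x)+\frac{h^2}{6}f'''(x)+O(h^4)=f'(x)+O(h^2).$$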