Why do we interpolate - no guarantee of success


This is somewhat of a general question about interpolation: I don't fully understand how we can be confident that our approximation is good, even if we know a lot of points.

An example would be:

Suppose we know that $x_0=100, f(x_0)=10$, $x_1=121, f(x_1)=11$, $x_2=144, f(x_2)=12$, and we want to approximate $f(\alpha)$ where $\alpha=115$ for example.

Using Newton's interpolation method with divided differences, I found that the interpolating polynomial is

$P_{newton}(x)=10+\frac{x-100}{21}-\frac{(x-100)(x-121)}{10626}$

And so, $P_{newton}(\alpha) \approx 10.7227555$ using a calculator.
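As a quick check, here is a minimal pure-Python sketch (the name `p_newton` is mine, and I assume the hidden function really is $\sqrt{x}$, as the data suggests):

```python
import math

# Newton form of the interpolant through (100, 10), (121, 11), (144, 12),
# using the divided differences computed above
def p_newton(x):
    return 10 + (x - 100) / 21 - (x - 100) * (x - 121) / 10626

approx = p_newton(115)
print(approx)                         # ≈ 10.7227555
print(abs(approx - math.sqrt(115)))   # ≈ 1.05e-3, if f really is sqrt
```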

But how do we know it's close to $f(115)$? Maybe our original $f$ was $$f(x)=\begin{cases} \sqrt{x}, & x\neq 115 \\ -10^8, & x=115 \end{cases}$$ and we are nowhere close to that.

So what good is interpolation if it doesn't give us any information about the function we are trying to investigate?

There are 2 best solutions below


Interpolation has more applications than the one you mentioned.

But first I would like to address your concrete problem. You are completely right that your interpolant cannot reproduce the value at $x=115$ if we do not include that point among the given data. However, this shouldn't be a problem, since

  • if $x=115$ is just some arbitrary point of no particular interest, then we are also not interested in the value of $f(115)$;
  • if $x=115$ is not an arbitrary point but one of particular interest, then we should have included it in the construction of our interpolant a priori.

More important is the fact that functions of the given type are not the kind of functions we are interested in when we apply interpolation. Interpolation only makes sense under certain smoothness assumptions; otherwise we would be trying to interpolate a function whose values are arbitrary in the first place.

Coming back to my initial statement, let me give you another application of interpolation. Interpolation is often used for integration: we interpolate a function (e.g. by polynomials) and then integrate the interpolant, which can be done analytically for polynomials. In this case it wouldn't matter if single points were interpolated completely wrong, since the functions \begin{align} f(x) = \begin{cases} \sqrt{x}\quad & x\not = 115 \\ -10^8\quad & x = 115 \end{cases} \end{align} and \begin{align} g(x) = \sqrt{x} \end{align} have the same integral. Our guess $P_{newton}$ is sufficient.
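To illustrate this numerically, here is a minimal Python sketch (I assume the underlying function is $\sqrt{x}$ and integrate over the data interval $[100,144]$; the exact value is $\int_{100}^{144}\sqrt{x}\,dx = \tfrac{2}{3}(144^{3/2}-100^{3/2}) = \tfrac{1456}{3}$):

```python
import math

def p_newton(x):
    # quadratic interpolant of the data (100, 10), (121, 11), (144, 12)
    return 10 + (x - 100) / 21 - (x - 100) * (x - 121) / 10626

# exact integral of sqrt(x) over [100, 144]
exact = (2.0 / 3.0) * (144 ** 1.5 - 100 ** 1.5)   # = 1456/3 ≈ 485.3333

# composite trapezoidal rule on a fine grid, so the quadrature error
# is negligible next to the interpolation error
a, b, n = 100.0, 144.0, 100_000
h = (b - a) / n
integral = h * ((p_newton(a) + p_newton(b)) / 2
                + sum(p_newton(a + i * h) for i in range(1, n)))

print(abs(integral - exact))   # small; a lone bad value at x = 115 would not change it
```

Changing $f$ at the single point $x=115$ changes neither the data nor the integral, so the interpolant's integral is equally good for both $f$ and $g$.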

So, summing up, interpolation has its place, since

  • if we are interested in evaluating our polynomial at single arbitrary values, we must assume in the first place that the function is smooth;
  • if we are not interested in evaluating our polynomial at single arbitrary values, it doesn't matter that such points may be interpolated incorrectly.

I'll add something as well, possibly expanding on something said in the comments.

The theory behind interpolation is not developed for arbitrary functions. Indeed, if we made no restrictions, functions such as the Dirichlet function ($f(x) = 1_{\Bbb Q}(x)$, the indicator function of the rationals) would pose a serious problem.

We therefore restrict ourselves to smooth functions. When we interpolate using a single polynomial, we usually assume the function to be $C^{\infty}$, or at least differentiable as many times as the number of points we have. Say we have sampled the function at points $\{x_0,\dots,x_n\}$, found the corresponding interpolation polynomial $p_n(x)$, and are interested in the value at $x$. Then we have an expression for the error:

$$f(x) - p_n(x) = \frac{f^{(n+1)}(\zeta)}{(n+1)!} \prod_{i=0}^n (x-x_i)$$

where $\zeta$ is some unknown point in the interval $[x_0, x_n]$ (assuming the points are ordered). You can see that if we assume some bound $M$ on all the derivatives on the interval and simply increase the number of points, we can make the expression small thanks to the factorial in the denominator.
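Here is a small pure-Python experiment (the helper names are mine) that interpolates $\sqrt{x}$ on $[100,144]$ at $n+1$ equispaced nodes via divided differences and measures the maximum error on a dense grid; for this smooth function the error indeed shrinks rapidly as $n$ grows:

```python
import math

def divided_differences(xs, ys):
    """Coefficients of the Newton form, computed in place."""
    coef = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, x):
    """Horner-style evaluation of the Newton form."""
    result = coef[-1]
    for c, xi in zip(reversed(coef[:-1]), reversed(xs[:-1])):
        result = result * (x - xi) + c
    return result

a, b = 100.0, 144.0
errors = []
for n in (2, 4, 8):
    xs = [a + i * (b - a) / n for i in range(n + 1)]
    coef = divided_differences(xs, [math.sqrt(x) for x in xs])
    # maximum error over a dense grid of sample points
    err = max(abs(newton_eval(coef, xs, a + k * (b - a) / 1000)
                  - math.sqrt(a + k * (b - a) / 1000)) for k in range(1001))
    errors.append(err)
    print(n, err)   # the maximum error drops as n grows
```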

However, as sonystarmap mentioned in his answer, interpolation such as this is usually used for other purposes, such as integration. What is done more often in practice, when you have many points and want to know the answer "in the middle", is spline fitting. A spline basically interpolates each group of several neighboring points by a low-order polynomial, and then "glues" all the polynomials together in a continuous way (usually also with matching first- and second-order derivatives).

The simplest example is linear interpolation between each two neighboring points (that is, you simply "connect the dots" you have measured). In this case we only ask that the function be $C^2$.

Assume the sampled points are equispaced, $x_{i+1} - x_i = h$. Then any point you are interested in lies between some two consecutive points $x_i, x_{i+1}$. You can use the very same interpolation error formula from before with $n=1$ to get:

$$|f(x) - p(x)| = \frac{|f''(\zeta)|}{2}|x-x_i|\,|x-x_{i+1}| \leq \frac{M_2h^2}{8}$$

where $M_2$ is a bound on the second derivative. As you increase the number of points, $h$ gets smaller, and the error bound gets better.
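A quick numerical check of this bound (a pure-Python sketch, names mine): for $f(x)=\sqrt{x}$ on $[100,144]$ we have $|f''(x)| = \tfrac14 x^{-3/2}$, largest at $x=100$, so we may take $M_2 = \tfrac14\cdot 100^{-3/2}$, and the measured error of the piecewise-linear interpolant stays below $M_2h^2/8$:

```python
import math

a, b = 100.0, 144.0
M2 = 0.25 * 100 ** (-1.5)   # bound on |f''| over [100, 144], attained at x = 100

def max_linear_error(n):
    """Max |sqrt(x) - piecewise-linear interpolant| on a dense grid, n subintervals."""
    h = (b - a) / n
    worst = 0.0
    for k in range(1001):
        x = a + (b - a) * k / 1000
        i = min(int((x - a) / h), n - 1)   # index of the containing subinterval
        x0, x1 = a + i * h, a + (i + 1) * h
        p = math.sqrt(x0) + (math.sqrt(x1) - math.sqrt(x0)) * (x - x0) / h
        worst = max(worst, abs(math.sqrt(x) - p))
    return worst

for n in (4, 8, 16):
    h = (b - a) / n
    print(n, max_linear_error(n), M2 * h * h / 8)   # error <= bound; both shrink like h^2
```

Doubling the number of subintervals roughly quarters both the measured error and the bound, as the $h^2$ factor predicts.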

These examples show that, under just these general assumptions, we can in fact guarantee any precision set beforehand (up to the bounds on the derivatives, which we seldom know precisely in practice).