Why does the interpolation error go to zero if we increase the number of sampling points?


This question is motivated by polynomial interpolation. We know that for $f\in C^{n+1}[a,b]$ and nodes $a=x_0<\dots<x_n=b$, the interpolating polynomial $p_n$ of $f$ at these nodes satisfies

$$\| f - p_n \|_\infty \leq \frac{1}{(n+1)!} \| f^{(n+1)} \|_\infty \max_{x\in[a,b]}|(x-x_0)\cdots(x-x_n)|$$

My lecturer remarked that if the sequence $\left(\| f^{(n)} \|_\infty \right)_{n\in \mathbb N}$ is bounded, then the interpolation error goes to zero as we increase the number of sampling points. By the estimate above, this would follow from

$$\frac{1}{(n+1)!}\max_{x\in[a,b]}|(x-x_0)\cdots(x-x_n)| \xrightarrow{n\to\infty} 0$$

I don't quite see why this should be the case. For any given $x_0, \ldots, x_n$, finding the maximum is quite tedious, and I'm not aware of any way to bound it.

On BEST ANSWER

You don't have to find the maximum; a crude upper bound suffices here. Since $|x-x_i|\le |b-a|$ for every $i=0,1,\dots,n$ and every $x\in[a,b]$, it follows that $$|(x-x_0)\cdots(x-x_n)| \le (b-a)^{n+1}$$ When divided by $(n+1)!$, this goes to zero: the factorial grows super-exponentially. Indeed, $(b-a)^{n+1}/(n+1)!$ is the general term of the convergent series for $e^{b-a}$, so it must tend to zero.
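As a quick numerical sanity check (my own illustration, with the interval length $b-a=2$ chosen arbitrarily), the sequence $(b-a)^{n+1}/(n+1)!$ rises for small $n$ and then collapses to zero:

```python
import math

# Crude bound (b-a)^(n+1)/(n+1)! for an interval of length b-a = 2
# (the length 2 is an arbitrary choice for illustration).
length = 2.0
bounds = [length**(n + 1) / math.factorial(n + 1) for n in range(30)]

for n in (0, 4, 9, 29):
    print(f"n = {n:2d}   bound = {bounds[n]:.3e}")
```

The exponential in the numerator wins at first, but each step multiplies by $(b-a)/(n+2)$, which is eventually far below $1$.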


The above estimate also hints that we can tolerate some growth of the derivatives, and that the convergence may have something to do with the convergence of the Taylor series of $f$.
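To see the theorem in action, here is a small experiment of my own (not part of the original answer): $f=\sin$ satisfies $\|f^{(n)}\|_\infty \le 1$ for every $n$, so the bound forces the interpolation error on equispaced nodes to vanish as $n$ grows. The sketch below builds the interpolating polynomial with `numpy.polyfit` (fitting degree $n$ through $n+1$ points interpolates exactly) and estimates the sup-norm error on a dense grid:

```python
import numpy as np

# f = sin has all derivatives bounded by 1, so the error bound
# (b-a)^(n+1)/(n+1)! applies and tends to zero.
f = np.sin
a, b = 0.0, np.pi
xx = np.linspace(a, b, 2000)              # dense grid to estimate the sup-norm

errors = []
for n in (2, 5, 10):
    nodes = np.linspace(a, b, n + 1)      # n+1 equispaced sampling points
    coeffs = np.polyfit(nodes, f(nodes), n)       # coefficients of p_n
    errors.append(np.max(np.abs(f(xx) - np.polyval(coeffs, xx))))
    print(f"n = {n:2d}   max error = {errors[-1]:.2e}")
```

The measured errors shrink rapidly with $n$, consistent with the factorial decay of the bound. (For functions with unbounded derivative growth, such as Runge's $1/(1+25x^2)$, equispaced interpolation can instead diverge, which is exactly why the boundedness hypothesis matters.)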