When does the remainder term in the taylor series go to zero?


Theorem: Let $f\in C^{N+1}([\alpha,\beta])$ and $x,x_0\in(\alpha,\beta)$. Then

$$f(x)=f(x_0)+f'(x_0)(x-x_0)+\frac{1}{2}f''(x_0)(x-x_0)^2+\cdots+\frac{f^{(N)}(x_0)}{N!}(x-x_0)^N+\frac{(x-x_0)^{N+1}}{N!}\int_0^1 (1-t)^N f^{(N+1)}(x_0+t(x-x_0))\,dt$$

where the remainder term, $R_N$, is

$$R_N=\frac{(x-x_0)^{N+1}}{N!}\int_0^1 (1-t)^Nf^{(N+1)}(x_0+t(x-x_0))dt$$
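One way to see what $R_N$ "means" is to check the identity numerically: the Taylor polynomial plus the remainder must reproduce $f(x)$ exactly. A minimal sketch (the choices $f = \exp$, $x_0 = 0$, $x = 1$, $N = 3$ and the midpoint rule for the integral are mine, purely for illustration):

```python
import math

# Numerical check of the theorem's identity for f = exp, x0 = 0.
def taylor_exp(x, N):
    # Partial Taylor sum of exp at x0 = 0: sum_{k=0}^{N} x^k / k!
    return sum(x**k / math.factorial(k) for k in range(N + 1))

def remainder_integral(x, N, steps=10_000):
    # R_N = x^{N+1}/N! * integral_0^1 (1-t)^N e^{t x} dt, via the midpoint rule
    h = 1.0 / steps
    integral = h * sum((1 - (i + 0.5) * h)**N * math.exp((i + 0.5) * h * x)
                       for i in range(steps))
    return x**(N + 1) / math.factorial(N) * integral

x, N = 1.0, 3
lhs = math.exp(x)                                  # f(x)
rhs = taylor_exp(x, N) + remainder_integral(x, N)  # T_N(x) + R_N
print(lhs, rhs)  # the two values agree to many decimal places
```

So $R_N$ is precisely the gap between $f$ and its degree-$N$ Taylor polynomial, written in closed form.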

I'm really not understanding this concept, especially because I'm struggling to comprehend what $R_N$ actually means.


BEST ANSWER

There is no easy answer to the question of how to prove that the remainder term goes to zero. It is an art. The art of bounds, the mathematical art known as "Analysis".

If you try some examples you may begin to develop a mastery of this art. For instance, try $f(x) = \cos(x)$. The absolute value of the integrand is

$$\bigl| \, (1-t)^N f^{(N+1)}(x_0+t(x-x_0)) \, \bigr|$$

This is easy to bound: every derivative of $\cos(x)$ is $\pm\sin(x)$ or $\pm\cos(x)$, and $0 \le t \le 1$, so the integrand is at most $1$ in absolute value for all $t$. Integrating between $0$ and $1$, and using $\bigl| \int g \bigr| \le \int |g|$, we get

$$|R_N| \le \frac{|x-x_0|^{N+1}}{N!}$$

Since $|x-x_0|$ is independent of $N$, this comes down to a standard limit that you should know from calculus, namely

$$\lim_{N \to \infty} \frac{a^N}{N!}=0$$
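The bound can be watched collapsing numerically. A small sketch (the value $a = |x - x_0| = 10$ is my own, deliberately large, example):

```python
import math

# The bound |R_N| <= |x - x0|^{N+1} / N! for f = cos, evaluated at
# |x - x0| = 10: the factorial still wins eventually.
a = 10.0
Ns = list(range(0, 60, 10))
bounds = [a**(N + 1) / math.factorial(N) for N in Ns]
for N, b in zip(Ns, bounds):
    print(N, b)  # grows at first, then collapses toward 0
```

Note that the bound grows before it shrinks: $a^N$ dominates for small $N$, but once $N > a$ each new factor in $N!$ exceeds $a$ and the ratio plummets.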

ANSWER

Hint: A truncated Taylor series is called a Taylor polynomial. What is the Taylor series of a polynomial function?
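To spell out the hint's point with a sketch (the polynomial $p(x) = x^3$ and center $x_0 = 1$ are my arbitrary choices): a polynomial's Taylor expansion reproduces it exactly, so $R_N \equiv 0$ as soon as $N \ge \deg p$.

```python
import math

# For p(x) = x^3, the degree-3 Taylor polynomial at x0 = 1 equals p
# everywhere, so the remainder R_3 is identically zero.
def p(x):
    return x**3

def taylor_p(x, x0=1.0):
    derivs = [x0**3, 3 * x0**2, 6 * x0, 6.0]  # p, p', p'', p''' at x0
    return sum(d * (x - x0)**k / math.factorial(k)
               for k, d in enumerate(derivs))

for x in (-2.0, 0.5, 3.0):
    print(x, p(x), taylor_p(x))  # the last two columns agree
```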

ANSWER

The idea behind Taylor series is to approximate an $(N+1)$-times continuously differentiable function near a point using the information given by its derivatives at that point. For example, when a function is differentiable and its derivative is continuous, we can give a linear approximation of the function at any point; this is the first-order Taylor approximation.

Of course we cannot reconstruct the whole function (say, $f$) from its Taylor approximation of any finite order, since information about $f$ is lost. In particular, when a function is not infinitely differentiable, the Taylor approximation is only available up to a certain order (namely, $N$). The remainder represents the lost information, and can be defined precisely. However, Taylor's theorem states that the higher the order of the approximation, the more negligible the remainder is close to the point of approximation, $x_0$. Technically, this can be stated precisely using orders of convergence.
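The "orders of convergence" claim can be observed directly: as $h = x - x_0 \to 0$, the remainder shrinks like $h^{N+1}$, and $R_N(h)/h^{N+1}$ tends to the constant $f^{(N+1)}(x_0)/(N+1)!$. A sketch (my example choices: $f = \exp$, $x_0 = 0$, $N = 2$, so the limit constant is $1/3! = 1/6$):

```python
import math

# The remainder R_N(h) = f(h) - T_N(h) for f = exp at x0 = 0 behaves
# like h^{N+1}, so R_N(h)/h^{N+1} approaches f^{(N+1)}(0)/(N+1)! = 1/6.
def taylor_exp(x, N):
    return sum(x**k / math.factorial(k) for k in range(N + 1))

N = 2
for h in (0.1, 0.01, 0.001):
    ratio = (math.exp(h) - taylor_exp(h, N)) / h**(N + 1)
    print(h, ratio)  # approaches 1/6 = 0.1666...
```

Halving $h$ thus cuts the remainder by roughly $2^{N+1}$: the higher the order, the faster the approximation improves near $x_0$.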