Significance of the $(n+1)$-th Term in Taylor Series Approximations

I've been working with Taylor series for a short while now and have worked through several exercises where you are supposed to calculate the remainder term. As I have learnt it, the remainder term is defined as:

$R_n(x) = \frac{f^{(n+1)}(z)}{(n+1)!} x^{n+1}$

What I don't understand is why the $(n+1)$-th term is the important one. Since the series theoretically continues infinitely, why do we focus on the $(n+1)$-th term when determining the accuracy of the approximation? Shouldn't the remainder term account for all terms beyond the $n$-th order, that is, the $(n+2)$-th, $(n+3)$-th, and so on as well? Why are these not part of the remainder term?

Thanks in advance for the help!

There is 1 best solution below

The remainder term provides a much easier way to show that your Taylor polynomial of degree $n$ for a function $f(x)$, $$P_n(x) = \sum_{i=0}^n \frac{f^{(i)}(0)}{i!} x^i,$$ is close to $f(x)$ without computing the entire Taylor series, which could be prohibitively expensive or numerically impossible. Note that $R_n(x)$ is not "just one more term" of the series: the unknown point $z$ is chosen precisely so that $f(x) = P_n(x) + R_n(x)$ holds exactly, so this single expression already accounts for the entire tail of the series.
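As a quick sanity check (my own sketch, not part of the original answer), take $f(x) = e^x$, whose derivatives at $0$ are all $1$. Since $e^z \le e^x$ for $z \in [0, x]$, the Lagrange remainder gives the bound $|R_n(x)| \le e^x \, x^{n+1}/(n+1)!$, and the actual gap between $f(x)$ and $P_n(x)$, i.e. the whole tail of the series, indeed stays under it:

```python
import math

def taylor_poly_exp(x, n):
    """Degree-n Maclaurin polynomial of exp (every derivative of exp at 0 is 1)."""
    return sum(x**i / math.factorial(i) for i in range(n + 1))

x, n = 1.0, 6
actual = math.exp(x)
approx = taylor_poly_exp(x, n)

# Lagrange bound with z somewhere in [0, x]: |R_n(x)| <= e^x * x^(n+1) / (n+1)!
bound = math.exp(x) * x**(n + 1) / math.factorial(n + 1)

# The single remainder term really does cover the entire tail of the series:
assert abs(actual - approx) <= bound
```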

Let's suppose we are building a calculator app and we want to compute $\sin(x)$ for $x \in [0,\pi/2]$ (the other cases can be reduced to this one). It has the Taylor series expansion $$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$$ The relative precision of an IEEE single-precision float is about $10^{-7}$, and of a double about $10^{-16}$: terms smaller than this relative to the result simply vanish when added in floating-point arithmetic. If we were computing, say, $\sin(1/10)$, adding any term past $\frac{x^{17}}{17!}$ would not change the computed sum at all, so we cannot get a more accurate approximation of $\sin(1/10)$ this way. Without other methods of computing $\sin(x)$, it's impossible to compute $\sin(1/10)$ exactly, so we need some justification for why $P_n(x)$ is close to $\sin(x)$. Since every derivative of $\sin$ is bounded by $1$ in absolute value, the remainder term immediately lets us conclude $$|P_n(x) - \sin(x)| \leq \frac{|x|^{n+1}}{(n+1)!},$$ and so for any $x \in [0,\pi/2]$, it suffices to compute $P_{15}(x)$ to get an approximation of $\sin(x)$ accurate to single precision. This example is a little silly because $\sin(x)$ has a nice alternating Taylor series that converges everywhere, but the remainder term really makes the estimates easier.
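To make this concrete, here is a minimal Python sketch (an illustration under the answer's setup, not production code) that evaluates $P_{15}(x)$ for $\sin$ and checks the remainder bound $|x|^{16}/16!$ across $[0, \pi/2]$:

```python
import math

def sin_taylor(x, n_terms=8):
    """Maclaurin polynomial for sin(x) through x^15 (n_terms=8 odd-degree terms)."""
    total = 0.0
    for k in range(n_terms):
        total += (-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
    return total

# Check the Lagrange bound |P_15(x) - sin(x)| <= x^16 / 16! on [0, pi/2];
# the small slack term covers double-precision rounding in the sum itself.
for i in range(100):
    x = (math.pi / 2) * i / 99
    assert abs(sin_taylor(x) - math.sin(x)) <= x**16 / math.factorial(16) + 1e-12
```

At the worst point, $x = \pi/2$, the bound is roughly $6.6 \times 10^{-11}$, comfortably below single-precision accuracy of about $10^{-7}$.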

A more common example of using the remainder term is to evaluate limits, as in the central limit theorem. Suppose you have the characteristic function $\phi$ of a random variable $X$ with mean $0$ and variance $1$ (and, say, a bounded third moment), given by $$\phi(t) = \mathbb{E}[e^{itX}],$$ and your goal is to evaluate the characteristic function of $(X_1 + X_2 + \cdots + X_n)/\sqrt{n}$, which turns out to be $(\phi(t/\sqrt{n}))^n$. Trying to compute this limit directly is fairly difficult unless you Taylor expand $\phi$ around $0$: once you know that everything past the quadratic term of $\phi(t/\sqrt{n})$ can be lumped into a single remainder of magnitude at most $C \cdot |t|^3/n^{3/2}$, computing the limit (it is $e^{-t^2/2}$, the characteristic function of a standard normal) becomes a trivial affair. Without that uniform bound on the remainder, you would be lost trying to control all the higher-order terms.
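This convergence can be checked numerically for a concrete choice of $X$ (my own example, not from the answer): for a Rademacher variable $X = \pm 1$ with equal probability, which has mean $0$ and variance $1$, the characteristic function is $\phi(t) = \cos(t)$, and Taylor expansion gives $\phi(t/\sqrt{n}) = 1 - \frac{t^2}{2n} + O(n^{-2})$, so $(\phi(t/\sqrt{n}))^n \to e^{-t^2/2}$:

```python
import math

# Rademacher X = +/-1: phi(t) = E[e^{itX}] = cos(t).
# The remainder bound says (cos(t/sqrt(n)))^n -> exp(-t^2/2) as n grows.
t = 1.7
target = math.exp(-t**2 / 2)
errors = [abs(math.cos(t / math.sqrt(n))**n - target) for n in (10, 1000, 100000)]

# The error shrinks as n grows, consistent with the O(t^3 / n^{3/2}) remainder
# (here it is in fact O(t^4 / n), since cos is even and the cubic term vanishes).
assert errors[0] > errors[1] > errors[2]
assert errors[2] < 1e-4
```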