When is Taylor approximation applicable?


I'm quite new to Taylor series and I have some questions.

  1. Given a function $f$, can it be approximated by a degree-$n$ Taylor polynomial if and only if $f$ is $n$ times differentiable? Or do I also need to think about the interval of convergence of the Taylor series?

  2. As $n$ gets bigger, can I approximate the function further from the expansion point $x_0$?


There are 4 answers below.


The answer to both questions is negative. A classical example is this one: consider the function $f\colon\mathbb{R}\longrightarrow\mathbb R$ defined by$$f(x)=\begin{cases}e^{-1/x^2}&\text{ if }x\neq0\\0&\text{ otherwise.}\end{cases}$$It can be proved that $f$ is $n$ times differentiable for every natural number $n$, and also that $(\forall n\in\mathbb{N}):f^{(n)}(0)=0$. Therefore, the $n$th Taylor polynomial of $f$ at $0$ is the zero polynomial. So it is clear that the Taylor polynomials do not get closer and closer to $f(x)$ (for any $x\neq0$) as $n\to\infty$.
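To see this failure numerically, here is a short Python sketch (the sample points and variable names are my own choices): it checks that $f$ decays faster than any power of $x$ near $0$, which is why every derivative at $0$ vanishes, and that the approximation error at a fixed point away from $0$ never shrinks.

```python
import math

def f(x):
    # the classical smooth-but-not-analytic function
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# f decays faster than any power of x near 0 -- this is why every
# derivative of f at 0 turns out to be 0.
for n in (1, 5, 10):
    print(n, f(0.1) / 0.1**n)

# Hence every Taylor polynomial P_n of f at 0 is identically 0, and the
# error |f(x0) - P_n(x0)| at a fixed x0 != 0 never shrinks as n grows.
x0 = 0.5
print(abs(f(x0) - 0.0))  # equals f(0.5), about 0.0183, for every n
```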


Given a function $f$, can it be approximated with Taylor polynomials ($n$-degree) iff $f$ is $n$ times differentiable?

No. A function can be infinitely differentiable and yet its Taylor series can fail to converge to it, meaning that the $n$th Taylor polynomial can fail to be a good approximation no matter how large $n$ is.

The textbook example of this phenomenon is $$f(x)=\begin{cases}e^{-\frac{1}{x^2}} & x>0\\0 & x \leq 0.\end{cases}$$ At $x=0$ all its derivatives are $0$, but it is not the zero function. Notice that the formula $e^{-1/x^2}$ used on the positive real line has a singularity (division by zero) at $x=0$. That is important.

As the $n$ gets bigger, I can approximate further than the expansion point $x_0$, right?

Higher-order Taylor approximations are indeed better approximations, as long as you stay within the radius of convergence for a well-behaved function. For example, the Taylor series for $f(x) = \frac{1}{1-x}$ is $1+x+x^2+\dotsb$. Its radius of convergence is $1$; that is, it converges for $|x|<1$. As long as you consider $x$ values in this range, higher-order Taylor polynomials give better approximations, even farther away from $0$.

But once you go past $x=1$, the Taylor series fails to converge, and the Taylor polynomials fail to approximate the function. Notice that the function has a pole at $x=1.$
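Both regimes are easy to check numerically. A quick Python sketch (the helper name `taylor_geom` is my own):

```python
def taylor_geom(x, n):
    """Degree-n Taylor polynomial of 1/(1-x) at 0: 1 + x + ... + x^n."""
    return sum(x**k for k in range(n + 1))

# Inside the radius of convergence (|x| < 1) the error shrinks as n grows:
# 1/(1 - 0.5) = 2.
for n in (2, 5, 20):
    approx = taylor_geom(0.5, n)
    print(n, approx, abs(approx - 2.0))

# Past the pole at x = 1 the partial sums simply blow up instead of
# approximating 1/(1-2) = -1.
for n in (2, 5, 20):
    print(n, taylor_geom(2.0, n))
```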

If a function has a pole, this will typically spoil the nice convergence properties of the Taylor series of the function.


A standard counterexample is $f(x)= e^{-1/x^2}$ for $x\neq0$, with $f(0)= 0$. It is not hard to see that this function is infinitely differentiable (for $x \neq 0$, the $n$th derivative of $f$ is $e^{-1/x^2}$ times a polynomial in $1/x$) and that all of its derivatives are $0$ at $x= 0$. That is, the Maclaurin series, the Taylor series at $x= 0$, is identically $0$, although $f$ itself is $0$ only at $x= 0$. A function that has a Taylor series at a point, and that is equal to its Taylor series in some neighborhood of that point, is called "analytic" at that point. Obviously, if a function is analytic at a point it is infinitely differentiable there, but infinitely differentiable does NOT imply analytic.


Taylor's theorem comes in a few different phrasings, but in general it looks like this:

Given some function $f$ which is $n$ times differentiable at [or on an open set containing] $x$, we have: $$f(x+t) = f(x) + f'(x)t + \cdots + \frac{f^{(n-1)}(x)t^{n-1}}{(n-1)!} + R_n(x,t),$$ where $R_n(x,t)$ is an error term whose exact form depends on the phrasing.

Different phrasings give different bounds on the error, but as you have seen, there are pathological cases where the error does not go to zero as $n\to\infty$. There is a theorem which says that any power series [potentially including negative powers] has a radius [resp. an annulus (ring)] of convergence in the complex plane, but just because the Taylor series converges to some value does not mean that it converges to the correct value.

One phrasing of the theorem gives $R_n(x,t)=\frac{t^n}{n!}f^{(n)}(\xi)$ for some $\xi$ between $x$ and $x+t$, which depends on $n,x,t.$ This implies that if one can uniformly bound all the derivatives of $f$ within some region, as one can for many common functions, then the Taylor series must converge to the correct values there.
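For instance, every derivative of $\sin$ is bounded by $1$, so the bound above forces the Taylor polynomials to converge at every $t$, even far from the expansion point. A small Python check (the helper `sin_taylor` is my own illustrative function):

```python
import math

def sin_taylor(t, terms):
    """Partial sum of the Maclaurin series of sin with the given number
    of nonzero terms (so, degree 2*terms - 1)."""
    return sum((-1)**k * t**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

# The Lagrange bound |R_n| <= |t|^n / n! eventually goes to 0 even for
# t = 10, because n! outgrows 10^n ...
for n in (20, 60):
    print(n, 10.0**n / math.factorial(n))

# ... so the Taylor polynomials converge to sin(t) everywhere.
print(abs(sin_taylor(10.0, 30) - math.sin(10.0)))  # tiny
```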

A corollary to the fundamental theorem of applied maths (that “if it looks right then it is”) is that all Taylor series in the real world converge to the right values.