When do Taylor series provide a perfect approximation?


To my understanding, the Taylor series is a type of power series that provides an approximation of a function at some particular point $x=a$. But under what circumstances is this approximation perfect, and under what circumstances is it "off" even at infinity?

I realize this is a little hazy, so I'll rephrase: by "perfect" I refer to how a regular limit never actually reaches its value but instead comes with a sort of error term that you can make as small as you want, so for all practical purposes we treat it as zero error. Whereas for an imperfect approximation, maybe that arbitrarily small error term doesn't exist, or maybe the approximation is only correct at that particular point and nowhere else, etc.

So maybe what I am asking is: when does the Taylor series provide an equivalent representation of the function over all $x$ in $f$'s domain, and when does it not? And when it doesn't, how do we even know?


Accepted answer

Limits are exact

You have a misunderstanding about limits! A limit, when it exists, is just a value. An exact value.

It doesn't make sense to talk about the limit reaching some value, or there being some error. $\lim_{x \to 1} x^2$ is just a number, and that number is exactly one.

What you are describing — these ideas about "reaching" a value with some "error" — are descriptions of the behavior of the expression $x^2$ as $x \to 1$. Among the features of this behavior is that $x^2$ is "reaching" one.

By its very definition, the limit is the exact value that its expression is "reaching". $x^2$ may be "approximately" one, but $\lim_{x \to 1} x^2$ is exactly one.

Taylor polynomials

In this light, nearly everything you've said in your post is not about Taylor series, but instead about Taylor polynomials. When a Taylor series exists, the Taylor polynomial is given simply by truncating the series to finitely many terms. (Taylor polynomials can exist in situations where Taylor series don't.)

In general, the $n$-th order Taylor polynomial of an $n$-times differentiable function $f$ at the point $a$ is the sum

$$ \sum_{k=0}^n \frac{f^{(k)}(a)}{k!} (x-a)^k $$

Taylor polynomials, generally, are not exactly equal to the original function. The only time that happens is when the original function is a polynomial of degree less than or equal to $n$.
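To make this concrete, here is a small Python sanity check (the `taylor_poly` helper is invented for illustration, not from the answer): the degree-3 Taylor polynomial of $f(x) = x^3$ at $a = 1$ reproduces $x^3$ exactly, as claimed for polynomials.

```python
import math

def taylor_poly(derivs_at_a, a, x):
    """Evaluate the Taylor polynomial: sum of f^(k)(a) * (x-a)^k / k!."""
    return sum(d * (x - a) ** k / math.factorial(k)
               for k, d in enumerate(derivs_at_a))

# f(x) = x^3: its derivatives at a = 1 are f = 1, f' = 3, f'' = 6,
# f''' = 6, and every higher derivative is 0.
a = 1.0
derivs = [1.0, 3.0, 6.0, 6.0]

# The Taylor polynomial agrees with x^3 everywhere, not just near a.
for x in (-2.0, 0.5, 10.0):
    assert abs(taylor_poly(derivs, a, x) - x ** 3) < 1e-9
```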

The sequence of Taylor polynomials, as $n \to \infty$, may converge to something. The Taylor series is exactly the value that the Taylor polynomials converge to.

The error in the approximation of a function by a Taylor polynomial is something people study. One often speaks of the "remainder term" or the "Taylor remainder", which is precisely the error term. There are a number of theorems that put constraints on how big the error term can be.
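As a quick illustration of the remainder shrinking, here is a Python sketch (the name `taylor_exp` is made up for this example) that compares partial sums of the exponential series against `math.exp`:

```python
import math

def taylor_exp(x, n):
    """n-th Taylor polynomial of exp at 0: sum of x^k / k! for k = 0..n."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x = 2.0
remainders = [abs(math.exp(x) - taylor_exp(x, n)) for n in range(1, 20)]

# For exp at fixed x > 0 the partial sums increase toward e^x,
# so the remainder shrinks monotonically toward 0.
assert all(r2 < r1 for r1, r2 in zip(remainders, remainders[1:]))
assert remainders[-1] < 1e-10
```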

Taylor series can have errors!

Despite all of the above, one of the big surprises of real analysis is that a function might not be equal to its Taylor series! There is a notorious example:

$$ f(x) = \begin{cases} 0 & x = 0 \\ \exp(-1/x^2) & x \neq 0 \end{cases} $$

You can prove that $f$ is infinitely differentiable everywhere. However, all of its derivatives satisfy $f^{(k)}(0) = 0$, so its Taylor series around zero is simply the zero function.
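You can watch this failure numerically. Here is a small Python sketch (the helper `f` mirrors the definition above): the Taylor series at $0$ sums to $0$ for every $x$, yet $f$ itself is strictly positive away from $0$.

```python
import math

def f(x):
    # The classic smooth-but-not-analytic function from the text.
    return math.exp(-1.0 / x ** 2) if x != 0 else 0.0

# The Taylor series of f at 0 is identically zero, so it agrees
# with f only at the single point x = 0.
for x in (0.5, 0.2, 0.1):
    assert f(x) > 0.0  # f disagrees with its (zero) Taylor series

# f vanishes at 0 faster than any power of x: already at x = 0.1,
# f(x) is far below x**20.
assert f(0.1) < 0.1 ** 20
```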

Nevertheless, we define:

A function $f$ is analytic at a point $a$ if there is an interval around $a$ on which $f$ is (exactly) equal to its Taylor series.

"Most" functions mathematicians actually work with are analytic functions (e.g. all of the trigonometric functions are analytic on their domain), or analytic except for obvious exceptions (e.g. $|x|$ is not analytic at zero, but it is analytic everywhere else).

Answer (2 votes)

Recall that the Taylor expansion is only a local approximation around a point. A real-valued function $f$ on an open subset $U$ of $\mathbb{R}$ is called analytic if for all $x \in U$ there is some $r_x > 0$ such that the Taylor expansion at $x$ approximates $f$ perfectly on $(x-r_x,x+r_x)$ (i.e., it converges and coincides with $f$). In general, it is not so easy to see when a function is analytic. In particular, it is necessary but NOT sufficient that the function is smooth (i.e., that all derivatives exist and are continuous); the function $e^{-1/x^2}$ discussed in other answers is the standard counterexample.

As a side remark: all this is very much in contrast to complex analysis, where being analytic is equivalent to being differentiable.

Answer (4 votes)

Suppose we have a smooth function $f$. Its Taylor series at a point $a$ is the series$$\sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n.\tag1$$So, what you want to know is this: when does this series converge to $f(x)$ for each $x$ in the domain of $f$? In order to determine that, we study the remainder of the Taylor series, which is$$f(x)-\sum_{k=0}^n\frac{f^{(k)}(a)}{k!}(x-a)^k.$$Given $x$ in the domain of $f$, the series $(1)$ converges to $f(x)$ if and only if the sequence of remainders tends to $0$. This is what happens (for every $x$) in the case of the exponential, sine, and cosine functions.

The worst case is when you have a function such as$$f(x)=\begin{cases}e^{-1/x^2}&\text{ if }x\neq0\\0&\text{ if }x=0.\end{cases}$$In this case, the Taylor series at $0$ is just the null series, which converges to $f(x)$ when (and only when) $x=0$.

Answer (1 vote)

This is the fundamental question behind remainder estimates for Taylor's Theorem. Typically (meaning for sufficiently differentiable functions, and assuming without loss of generality that we are expanding at $0$), we have estimates of the form $$ f(x) = \sum_{n = 0}^N f^{(n)}(0) \frac{x^n}{n!} + E_N(x)$$ where the error $E_N(x)$ is given explicitly by $$ E_N(x) = \int_0^x f^{(N+1)}(t) \frac{(x-t)^N}{N!} \, dt,$$ though this is frequently bounded by $$ |E_N(x)| \leq \max_{t \in [0,x]} |f^{(N+1)}(t)| \, \frac{|x|^{N+1}}{(N+1)!}.$$

A Taylor series will converge to $f$ at $x$ if $E_N(x) \to 0$ as $N \to \infty$. If the derivatives are well-behaved, then this is relatively easy to check. But if the derivatives are very hard to understand, then this question can be very hard to answer.
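For a concrete case where the derivatives are well-behaved, take $f = \sin$: every derivative is bounded by $1$ in absolute value, so $|E_N(x)| \le |x|^{N+1}/(N+1)!$. A short Python check (the `taylor_sin` helper is invented for illustration):

```python
import math

def taylor_sin(x, n):
    """n-th Taylor polynomial of sin at 0."""
    # The derivatives of sin at 0 cycle through 0, 1, 0, -1, ...
    coeffs = [0.0, 1.0, 0.0, -1.0]
    return sum(coeffs[k % 4] * x ** k / math.factorial(k)
               for k in range(n + 1))

x = 1.5
for n in range(1, 12):
    error = abs(math.sin(x) - taylor_sin(x, n))
    # |f^(N+1)| <= 1 for sin, so the bound simplifies nicely.
    bound = x ** (n + 1) / math.factorial(n + 1)
    assert error <= bound + 1e-15
```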

There are examples of infinitely differentiable functions whose Taylor series don't converge in any neighborhood of the central expansion point, and there are examples of functions with pretty hard-to-understand derivatives whose Taylor series converge everywhere to that function. Asking for more is a bit nuanced for each individual function.

Answer (8 votes)

The term for what you're describing is "analytic". All of these are sufficient conditions for being analytic:

  1. Polynomial

  2. Exponential (note that one can generate trigonometric functions from exponential functions)

  3. Sum of analytic functions

  4. Product of analytic functions

  5. Derivative of analytic function

  6. Composition of analytic functions

  7. Reciprocal of an analytic function nowhere equal to zero

  8. Inverse of an analytic function with derivative nowhere equal to zero

  9. Integral of an analytic function (I think)

I'm not sure whether every analytic function can be built up using these rules, but most analytic functions you deal with probably can be.

Answer (0 votes)

I think the answer you are looking for is a major theorem in complex analysis:

Taylor series provide perfect approximations for holomorphic functions.

Holomorphic functions are functions that are complex-differentiable, i.e., functions $f: \mathbb{C} \to \mathbb{C}$ for which the complex derivative

$$ f'(z_{0})=\lim _{z\to z_{0}}{f(z)-f(z_{0}) \over z-z_{0}} $$

is well-defined (i.e. the limit exists and is unique) at all $z_0 \in \mathbb{C}$ in their domains.

This condition is much stronger than real differentiability. In fact, not only does it imply infinite differentiability, but it is precisely the condition required for $f$ to be analytic (i.e., around every point of its domain, the Taylor series converges to the function on a neighborhood of that point).
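One way to see how strong the condition is: the limit must agree no matter which direction $z$ approaches $z_0$ from. A small Python sketch (function names are my own) compares difference quotients along the real and imaginary axes for a holomorphic function ($z^2$) and a non-holomorphic one ($\bar z$):

```python
def quotient(f, z0, h):
    """Finite-difference approximation of the complex derivative at z0."""
    return (f(z0 + h) - f(z0)) / h

z0 = 1 + 2j
h = 1e-6

def square(z):
    return z * z

# Holomorphic: both directions approximate f'(z0) = 2*z0.
real_dir = quotient(square, z0, h)       # approach along the real axis
imag_dir = quotient(square, z0, h * 1j)  # approach along the imaginary axis
assert abs(real_dir - imag_dir) < 1e-4
assert abs(real_dir - 2 * z0) < 1e-4

def conj(z):
    return z.conjugate()

# Not holomorphic: the two directions give 1 and -1 respectively,
# so the complex derivative does not exist.
assert abs(quotient(conj, z0, h) - quotient(conj, z0, h * 1j)) > 1.0
```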