What does $x-a$ do in Taylor series?


In a Taylor series we have the factor $x-a$, which usually means shifting the function's graph $a$ units to the right (if $a$ is negative then it's a left shift, and if $a=0$ then there's no shift at all).

What I don't understand is why this is in here. The explanation I hear is "to approximate around a different point" but this makes no sense to me.

If I have a function $f(x)$ and I want to evaluate it at the point $a$, then I just plug in $a$, that is, evaluate $f(a)$. I don't understand the purpose of shifting the function over, or what it accomplishes that I couldn't get by just leaving $a=0$ in the first place. The function itself does not change; we're just picking it up and moving it over a bit.

There are 5 answers below.

Best answer (7 votes)

A Taylor expansion of $f$ around $a$ lets you simply read off the derivatives of arbitrary order at $a$. So it's just a neat way to write the function, one that packs in much information about how the function behaves near that point.

This is useful, for example, when calculating limits of functions as $x\to a$ (the situation where l'Hôpital's rule applies).
As an example, if you have the polynomials $f(x)=2(x-1)+3(x-1)^2$ and $g(x)=x-1$, then the limit $$\lim_{x\to 1}\frac {f(x)}{g(x)}=\lim_{x\to 1} \frac{2(x-1)+3(x-1)^2}{x-1}=2 $$ is easy to see. However, if we start from $f(x)=3x^2-4x+1$, then calculating the limit is not trivial anymore (still easy here, since the functions are simple, but no longer immediate).


Edit: Calculating the limits:

$$\lim_{x\to 1}\frac {f(x)}{g(x)}= \lim_{x \to 1}\frac{2(x-1)+3(x-1)^2}{x-1}= \lim_{x \to 1}\frac{2(x-1)}{x-1} + \lim_{x \to 1} \frac{3(x-1)^2}{x-1}\\ = \lim_{x \to 1} 2+ \lim_{x \to 1}3(x-1)= 2+0 = 2, $$ so the limit is just the coefficient in front of the linear term of the Taylor series of $f$.

If we don't already know the Taylor series, we have to calculate: $$ \lim_{x\to 1}\frac {f(x)}{g(x)} = \lim_{x\to 1}\frac {3x^2-4x+1}{x-1}.$$ To my knowledge, there is no way to simply read off the limit, so we have to use some tricks.
Note that $\lim_{x\to 1} (3x^2-4x+1) =0 = \lim_{x\to 1} (x-1)$, so we can apply l'Hôpital's rule: $f'(x) = 6x-4$, so $f'(1)=2$; $g'(x)=1$, so $g'(1)=1$. Thus we get: $$\lim_{x\to 1}\frac {f(x)}{g(x)} = \lim_{x\to 1}\frac {f'(x)}{g'(x)}= \frac 2 1 =2.$$

As I said before, this calculation is not that hard because $f$ and $g$ were chosen to be simple; but I guess this still illustrates the point that the second calculation was far more cumbersome than the first one.
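As a quick sketch (not part of the original answer; the variable names are mine), both limit computations can be checked with sympy:

```python
import sympy as sp

x = sp.symbols('x')

# f written around the center x = 1, and the same f multiplied out.
f_centered = 2*(x - 1) + 3*(x - 1)**2
f_expanded = 3*x**2 - 4*x + 1
g = x - 1

# Sanity check: these really are the same polynomial.
assert sp.expand(f_centered - f_expanded) == 0

# Both forms give the same limit; in the centered form it is simply the
# coefficient of the linear term, read off directly.
print(sp.limit(f_centered / g, x, 1))  # 2
print(sp.limit(f_expanded / g, x, 1))  # 2
```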

Answer (2 votes)

You are correct.

If our function $f(x)$ is a polynomial, it really does not matter whether we take the Taylor polynomial around $x=0$ or around any other point.

As you know, Taylor polynomials approximate a function $f(x)$, using the derivatives of $f(x)$ to compute the coefficients.

Some functions, such as $f(x) = \sqrt x$ or $f(x)=\log(x)$, are not differentiable at $x=0.$

Thus it is not possible to write the Taylor polynomials for these types of functions around $x=0.$

We instead use Taylor polynomials around another point, such as $x=1$, where the derivatives exist.
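A minimal sketch of this (my own, using sympy): build the degree-3 Taylor polynomial of $\log(x)$ around $a=1$, where the derivatives do exist.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.log(x)
a = 1

# T3(x) = sum of f^(n)(a)/n! * (x - a)^n for n = 0..3
T3 = sum(f.diff(x, n).subs(x, a) / sp.factorial(n) * (x - a)**n
         for n in range(4))

# It is the familiar expansion (x-1) - (x-1)^2/2 + (x-1)^3/3.
assert sp.expand(T3 - ((x - 1) - (x - 1)**2/2 + (x - 1)**3/3)) == 0
```

At $x=0$ the same construction fails immediately, since $\log$ and its derivatives are not defined there.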

Answer (0 votes)

A Taylor polynomial uses derivative data at a point, say $a \in \mathbb{R}$, to generate an approximation of a function. In this case "data" means derivatives of a function, say $f$. The term $x-a$ doesn't merely shift a polynomial from $0$ to $a$, but rather it indicates where the data is centered.

For instance, we write for a Taylor polynomial centered at zero:

$$T_n(x) = f(0) + f'(0) x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \cdots + \frac{f^{(n)}(0)}{n!} x^n.$$

The function $T_n(x)$ is designed so that its derivatives up to order $n$ match the function $f$ at $0$. You can verify this by calculating $T_n(0)$, $T'_n(0)$ etc. This polynomial depends very much on the center point. For instance, if we had instead:

$$T_n(x) = f(0) + f'(0) (x+1) + \frac{f''(0)}{2!} (x+1)^2 + \frac{f'''(0)}{3!} (x+1)^3 + \cdots + \frac{f^{(n)}(0)}{n!} (x+1)^n$$ and we examine $T_n(0)$, we get $f(0) + f'(0) + \cdots + f^{(n)}(0) \neq f(0)$ in general, so this wouldn't be a good estimate of $f$ at the origin.

Instead, what we would be looking for would be data at $a=-1$, and we would write:

$$T_n(x) = f(-1) + f'(-1) (x+1) + \frac{f''(-1)}{2!} (x+1)^2 + \frac{f'''(-1)}{3!} (x+1)^3 + \cdots + \frac{f^{(n)}(-1)}{n!} (x+1)^n$$

and here we have $T_n(-1) = f(-1)$, etc.

So the center point indicates what data we should use from $f$, and also where we have an exact match for the function estimation. How well a Taylor polynomial approximates a function away from that point varies depending on the function estimated.
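The matching property described above can be verified symbolically; here is a sketch (my example, with $f=\exp$, center $a=-1$, and $n=4$ chosen arbitrarily):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)
a, n = -1, 4

# Taylor polynomial of f of degree n, centered at a.
Tn = sum(f.diff(x, k).subs(x, a) / sp.factorial(k) * (x - a)**k
         for k in range(n + 1))

# Tn and f agree at a, in value and in all derivatives up to order n.
for k in range(n + 1):
    assert sp.simplify(Tn.diff(x, k).subs(x, a) - f.diff(x, k).subs(x, a)) == 0
```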

One example where a Maclaurin series (a Taylor series at the origin) would not work is the estimation of the natural logarithm.

In particular, $\ln(x)$ is not defined at the origin and has an asymptote there. In this case, $a=1$ is used for Taylor expansions.


As demonstration, let's derive the Taylor series for $\ln(x)$. For now call it $f$.

$$f'(x) = \frac{1}{x} = \frac{1}{1-(1-x)} = \sum_{n=0}^\infty (1-x)^n = \sum_{n=0}^\infty (-1)^n (x-1)^n.$$

Note that this series expansion is valid provided that $|x-1| < 1$. Integrating term by term, $f(x) = C + \sum_{n=0}^\infty \frac{(-1)^n}{n+1} (x-1)^{n+1}.$ We now need to find $C$; since $f(1) = C$ and $\ln(1) = 0$, we have $C = 0$.

Thus, $$\ln(x) = \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n} (x-1)^{n},$$ and we have the Taylor expansion of $\ln(x)$ at $1$.
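As a numerical sanity check (my own sketch), partial sums of this series should approach $\ln(1.5)$, since $|1.5-1|<1$:

```python
import math

def ln_series(x, terms):
    # Partial sum of sum_{n>=1} (-1)^(n-1) (x - 1)^n / n
    return sum((-1)**(n - 1) * (x - 1)**n / n for n in range(1, terms + 1))

# 50 terms at x = 1.5 already agree with math.log to rounding level.
assert abs(ln_series(1.5, 50) - math.log(1.5)) < 1e-12
```

Outside the interval of convergence, say at $x=3$, the same partial sums blow up instead of settling down.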

This comes with certain side benefits. For instance we know that a Taylor expansion is written as $$f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!} (x-a)^n.$$

Now say we want to know the $10000$-th derivative of $\ln(x)$ at $1$. We know that the coefficient on the $10000$-th term is $f^{(10000)}(1)/10000!$, but also that for the natural logarithm in particular it equals $\frac{(-1)^{9999}}{10000}$.

Equating we find: $\left.\frac{d^{10000}}{dx^{10000}} \ln(x) \right|_{x=1} = -9999!$
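The same trick can be checked at a small order with sympy (a sketch; order 5 is my choice, for speed): by the formula, $f^{(5)}(1) = 5!\cdot\frac{(-1)^4}{5} = 4! = 24$.

```python
import sympy as sp

x = sp.symbols('x')

# Differentiate ln(x) five times and evaluate at x = 1.
d5 = sp.diff(sp.log(x), x, 5).subs(x, 1)
print(d5)  # 24, matching 5! * 1/5 = 4!
```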

Answer (0 votes)

The Taylor series is sometimes a useful way to reduce the degree of a polynomial equation without resorting to long division. For example, suppose we have a third-degree equation, one of whose roots $(x=a)$ is known, and we seek the remaining two roots. We only have to rewrite the equation as its Taylor expansion at $x=a$: the constant term vanishes because $a$ is a root, and dividing by $x-a$ leaves a second-degree equation.
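A sketch of this trick on a hypothetical cubic $p(x)=x^3-6x^2+11x-6$ with known root $a=1$ (my example, checked with sympy):

```python
import sympy as sp

x, u = sp.symbols('x u')
p = x**3 - 6*x**2 + 11*x - 6
a = 1

# Taylor coefficients of p at a; the constant term vanishes since p(a) = 0.
coeffs = [p.diff(x, n).subs(x, a) / sp.factorial(n) for n in range(4)]
assert coeffs[0] == 0

# Dividing out (x - a) leaves a quadratic in the shift u = x - a.
quad = coeffs[1] + coeffs[2]*u + coeffs[3]*u**2
roots = [a + r for r in sp.solve(quad, u)]
print(sorted(roots))  # [2, 3], the remaining two roots
```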

Answer (0 votes)

Assume you have to compute a "complicated function", containing $\sqrt{\cdot}$s, $\exp$s, $\sin$s, etcetera, a million times at points $x$ near some interesting point $a$. Computing $f(x)$ exactly each time by plugging in $x$ would be time-consuming and expensive; but maybe you are willing to put up with a good approximation. That's where the Taylor polynomial comes in: it is a certain polynomial $p(X)=c_0+c_1X+c_2X^2+\ldots+c_nX^n$ in the increment variable $X:=x-a$, with coefficients $$c_j={f^{(j)}(a)\over j!}\ ,$$ computed once and for all, that gives you an approximate value of $f(x)$ at any nearby point $x:=a+X$ (note that $|X|\ll1$ here) with a well-controlled error, in no time.
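A minimal sketch of this idea (my own: $f=\exp$ and $a=0.5$ are illustrative choices, picked so the derivatives at $a$ are known exactly; the cheap evaluation uses Horner's scheme):

```python
import math

a, n = 0.5, 10
# Coefficients c_j = f^(j)(a)/j!; for f = exp every derivative at a is exp(a).
# These are computed once and for all.
c = [math.exp(a) / math.factorial(j) for j in range(n + 1)]

def p(x):
    # Horner evaluation of the Taylor polynomial in the increment X = x - a.
    X = x - a
    acc = 0.0
    for cj in reversed(c):
        acc = acc * X + cj
    return acc

# Near a, the precomputed polynomial matches exp to rounding level,
# at the cost of only n multiplications and additions per point.
assert abs(p(0.6) - math.exp(0.6)) < 1e-12
```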