Explanation of the "recurrence relation" for Taylor series (how did we come up with the coefficients, and where are the factorials?)


My teacher wrote these equations, but unfortunately I can't ask him why he did it.

$f(x) = \frac{1 + x + x^2}{1 - x + x^2}$, $x_0 = 0$

$P(x) = \frac{f^{(0)}(x_0) \cdot (x - x_0)^0}{0!} +\frac{f^{(1)}(x_0) \cdot (x - x_0)^1}{1!} + \frac{f^{(2)}(x_0) \cdot (x - x_0)^2}{2!} + \cdots + \frac{f^{(n)}(x_0) \cdot (x - x_0)^n}{n!} = a_0(x - x_0)^0 + a_1(x - x_0)^1 + \cdots + a_n(x - x_0)^n$

So my teacher found all the necessary coefficients $a_i$ using some "recurrence relation" (I don't know what to call it properly):
$x^0: f_{0}(0) = 1$
$x^1: f_{1}(x) = \frac{f_{0}(x) - f_{0}(0)}{x} = \frac{\frac{1 + x + x^2}{1 - x + x^2} - 1 }{ x } = \frac{2}{1 - x + x^2} \to 2, x \to 0$
$x^2: f_{2}(x) = \frac{f_1 (x) - f_1 (0) }{ x} = \frac{2 - 2x}{1 - x + x^2} \to 2, x \to 0$
$x^3: f_{3}(x) = \frac{f_2 (x) - f_2 (0) }{x} = \frac{-2x}{1 - x + x^2} \to 0, x \to 0$
$x^4: f_{4}(x) = \frac{f_3 (x) - f_3 (0) }{x} = \frac{-2}{1 - x + x^2} \to -2, x \to 0$
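For what it's worth, the step being repeated here (subtract the value at $0$, then divide by $x$) is easy to run symbolically. This sympy sketch (sympy is my own choice, not something from the lecture) reproduces the values above:

```python
import sympy as sp

x = sp.symbols('x')
f = (1 + x + x**2) / (1 - x + x**2)

# Repeatedly subtract the value at 0 and divide by x,
# reading off one coefficient per step.
g = f
for n in range(5):
    a_n = sp.limit(g, x, 0)       # a_0=1, a_1=2, a_2=2, a_3=0, a_4=-2
    print(f"a_{n} = {a_n}")
    g = sp.simplify((g - a_n) / x)
```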

Those functions ($f_1, f_2, f_3, f_4$) are clearly not derivatives, and I don't see any factorials in the denominators. I've tried to find some connection between the derivatives and these equations, but...
For example, $f^{(1)}(x) = \frac{2 - 2x^2}{(1 - x + x^2)^2}$, while $f_1(x) = \frac{2}{1 - x + x^2}$;
$f^{(2)}(x) = \frac{4(x^3 - 3x + 1)}{(1 - x + x^2)^3}$, while $f_2(x) = \frac{2 - 2x}{1 - x + x^2}$.
What should I do with that? Divide the derivative by the corresponding function? That gives different results.
There are no obvious patterns that I can see, just very small hints. So, my questions:

  1. What really are those $f_1, f_2, f_3, f_4$, and why does plugging $x_0$ in for $x$ give the correct coefficients?
  2. Where are factorials in those formulas?
  3. What kind of dark magic is this?

Best answer:

This is a different way of finding the coefficients for a polynomial expansion of an analytic function. For now, forget about the Taylor expansion, derivatives, and factorials.

If

$$ f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots, $$

then to find $a_0$ we just take $a_0 = \lim_{x \to 0} f(x)$. (In this example, all the functions involved are continuous at $0$, so the limits aren't that important.) To start out the pattern, call $f_0 = f$, and instead write

$$ a_0 = \lim_{x \to 0} f_0(x). $$

If we know $a_0$, then we can in a couple of steps get to a similar polynomial series with coefficients "shifted down":

$$ f_0(x) - a_0 = a_1 x + a_2 x^2 + a_3 x^3 + \cdots $$

$$ f_1(x) = \frac{f_0(x) - a_0}{x} = a_1 + a_2 x + a_3 x^2 + a_4 x^3 + \cdots $$

Then just like before,

$$a_1 = \lim_{x \to 0} f_1(x) $$

And it continues just like your teacher did...

$$ f_1(x) - a_1 = a_2 x + a_3 x^2 + a_4 x^3 + \cdots $$

$$ f_2(x) = \frac{f_1(x) - a_1}{x} = a_2 + a_3 x + a_4 x^2 + \cdots $$

$$ a_2 = \lim_{x \to 0} f_2(x) $$

If we wanted the expansion around a different point $x_0$, we would just have $(x-x_0)^k$ terms and $\lim_{x \to x_0}$ limits.

Now, given a function $f$ and a point $x_0$, only one power series can converge to that function on an interval containing $x_0$, so the Taylor method must give the same coefficients:

$$ \lim_{x \to x_0} f_n(x) = a_n = \frac{f^{(n)}(x_0)}{n!} $$

although there's not a simple and obvious relationship between the functions $f_n$ and $f^{(n)}$.
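This agreement is easy to check symbolically. Here is a sympy sketch; the expansion point $x_0 = 1/2$ is an arbitrary choice to illustrate the general-$x_0$ case, not something taken from the question:

```python
import sympy as sp

x = sp.symbols('x')
f = (1 + x + x**2) / (1 - x + x**2)
x0 = sp.Rational(1, 2)  # arbitrary expansion point, just for illustration

g = f
for n in range(5):
    # Coefficient from the shift-down recursion...
    a_n = sp.limit(g, x, x0)
    # ...and from the Taylor formula f^{(n)}(x0) / n!.
    taylor = sp.diff(f, x, n).subs(x, x0) / sp.factorial(n)
    assert sp.simplify(a_n - taylor) == 0
    g = sp.simplify((g - a_n) / (x - x0))
```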