proof of Taylor's theorem


[Images of the book's proof are not reproduced here; the proof defines $g(s)=f(s)-P^{x_0}_n(s)-M_{x,x_0}(s-x_0)^{n+1}$ and then computes $g^{(n+1)}(s)$.]

I am struggling to understand this proof. Near the end, I don't understand how the author derives the equation $g^{(n+1)}(s)=f^{(n+1)}(s)-(n+1)!M_{x,x_0}$. I think it should be $g^{(n+1)}(s)=f^{(n+1)}(s)-(P^{x_0}_n)^{(n+1)}(s)-M_{x,x_0}(s-x_0)^{(n+1)}$.

Could you help me understand this part?

Thank you in advance!


2 Answers

Best answer

The issue you are facing is not difficult to handle. The $(n+1)$-th derivative of $g$ can be evaluated term by term. The first term in the expression for $g(s)$ is $f(s)$, and its $(n+1)$-th derivative is $f^{(n+1)}(s)$. The second term is an $n$-th degree polynomial in $s$, so its $(n+1)$-th derivative vanishes. The last term is $M_{x, x_{0}}(s-x_{0})^{n+1}$, and its $(n+1)$-th derivative is $$M_{x, x_{0}}(n+1)n(n-1)\dots 3\cdot 2\cdot 1\,(s-x_{0})^{(n+1)-(n+1)}=(n+1)!\,M_{x,x_{0}},$$ as given in your book.
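This term-by-term computation can also be checked symbolically. The following is just a sketch (not from the book), using sympy with an abstract $f$ and the arbitrary concrete choice $n=3$; the names `P`, `g`, `M`, `x0` stand in for the corresponding objects in the proof.

```python
# Sketch: verify that for g(s) = f(s) - P_n(s) - M*(s - x0)^(n+1),
# where P_n is the degree-n Taylor polynomial of f about x0, we get
# g^(n+1)(s) = f^(n+1)(s) - (n+1)! * M.  Here n = 3 is an arbitrary choice.
import sympy as sp
from math import factorial

s, x0, M = sp.symbols('s x0 M')
f = sp.Function('f')
n = 3

# Degree-n Taylor polynomial of f about x0: an n-th degree polynomial in s,
# whose coefficients f^(k)(x0)/k! do not depend on s.
P = sum(f(s).diff(s, k).subs(s, x0) * (s - x0) ** k / factorial(k)
        for k in range(n + 1))

g = f(s) - P - M * (s - x0) ** (n + 1)
d = sp.diff(g, s, n + 1)

# The polynomial part vanishes; only f^(n+1)(s) and the constant survive.
expected = sp.diff(f(s), s, n + 1) - factorial(n + 1) * M
print(sp.simplify(d - expected))  # 0
```

Swapping in other values of `n` gives the same cancellation, which is the point of the answer: only the first and last terms of $g$ survive $(n+1)$-fold differentiation.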

Remember that derivatives are being computed with respect to $s$; the other letters ($x$, $x_0$, and hence $M_{x,x_0}$) are constants.

Another answer

Just a historical remark. There are actually two versions of Taylor's theorem, relying on slightly different regularity assumptions for $f$. The assumption for the "hard" version is "$f$ is $n$ times differentiable in a neighbourhood of the origin" and the assumption for the "easy" version is "$f^{(n)}(x)$ is continuous in a neighbourhood of the origin".

Proving the "hard" version requires some form of de l'Hôpital's rule, while proving the "easy" version is straightforward by integration by parts, as we will see below.

Now it is well known that not every derivative is continuous, but by Darboux's theorem every derivative has the intermediate value property, which is just a bit less than being continuous. And if we plan to integrate Maclaurin series, such a subtle difference between continuous and discontinuous derivatives can simply be ignored (by density arguments). This is the reason I usually teach just the "easy" version. As a clutter-removal practice, we may assume $x_0=0$ without loss of generality, up to replacing $f(x)$ with $f(x+x_0)$.

If $f^{(n)}(x)$ is continuous on $[0,a]$, it is Riemann-integrable there, and so is $\frac{(a-x)^{n-1}}{(n-1)!}f^{(n)}(x)$. By integration by parts $$ \int_{0}^{a}\frac{(a-x)^{n-1}}{(n-1)!}f^{(n)}(x)\,dx =\left[\frac{(a-x)^{n-1}}{(n-1)!}f^{(n-1)}(x)\right]_{0}^{a}+\int_{0}^{a}\frac{(a-x)^{n-2}}{(n-2)!}f^{(n-1)}(x)\,dx$$ and by induction $$f(a)-f(0) = \underbrace{\sum_{k=1}^{n-1}\frac{f^{(k)}(0)}{k!}\,a^k}_{\text{Taylor polynomial}}+\underbrace{\int_{0}^{a}\frac{f^{(n)}(t)}{(n-1)!}(a-t)^{n-1}\,dt}_{\text{integral remainder}}. $$ Done. By applying the mean value theorem for integrals to the remainder we recover its weaker, alternative forms (Lagrange, Cauchy, Peano). This approach is IMHO best suited for dealing with multivariate Taylor series, and it is also a key step in proving that $$ \lim_{n\to +\infty} e^{-n}\sum_{k=0}^{n}\frac{n^k}{k!} = \frac{1}{2}$$ without resorting to the strong law of large numbers or the central limit theorem.
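Both claims can be checked numerically. The sketch below (not part of the answer) verifies the integral-remainder identity for the concrete, assumed choice $f=\exp$ (so $f^{(k)}=\exp$ for every $k$) with a simple midpoint quadrature, and then observes the slow convergence of $e^{-n}\sum_{k\le n} n^k/k!$ towards $1/2$.

```python
# Sketch: numerically check the integral-remainder form of Taylor's theorem,
#   f(a) - f(0) = sum_{k=1}^{n-1} f^(k)(0)/k! * a^k
#                 + int_0^a f^(n)(t)/(n-1)! * (a-t)^(n-1) dt,
# for the concrete choice f = exp, where every derivative is exp itself.
from math import exp, factorial

def integral_remainder(a, n, deriv, steps=100_000):
    """Midpoint rule for int_0^a deriv(t)/(n-1)! * (a-t)^(n-1) dt."""
    h = a / steps
    total = sum(deriv(h * (i + 0.5)) * (a - h * (i + 0.5)) ** (n - 1)
                for i in range(steps))
    return total * h / factorial(n - 1)

a, n = 1.5, 6
taylor = sum(exp(0) / factorial(k) * a ** k for k in range(1, n))
lhs = exp(a) - exp(0)
rhs = taylor + integral_remainder(a, n, exp)
print(abs(lhs - rhs))  # tiny: the identity is exact, only quadrature error remains

# The limit e^{-n} * sum_{k=0}^{n} n^k/k! -> 1/2 can be observed as well;
# each term is kept pre-scaled by e^{-n} to avoid floating-point overflow.
def scaled_partial_sum(n):
    t = exp(-n)          # k = 0 term, already multiplied by e^{-n}
    s = t
    for k in range(1, n + 1):
        t *= n / k       # turns e^{-n} n^{k-1}/(k-1)! into e^{-n} n^k/k!
        s += t
    return s

print(scaled_partial_sum(500))  # close to 0.5; convergence is only O(1/sqrt(n))
```

The pre-scaling by $e^{-n}$ matters: computing $\sum n^k/k!$ first and multiplying by $e^{-n}$ afterwards overflows a double well before $n=500$.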

A bit of handwaving turns the proof of the "easy" version into a proof of the "intermediate" version, which only requires that $f^{(n)}(x)$ exists over some interval and is bounded there. Indeed, in the proof above we only exploited $\text{continuous}\Rightarrow\text{Riemann-integrable}$, but the machinery continues to work if we replace the Riemann integral with the Lebesgue integral and make sure $f^{(n)}(x)$ is integrable, which in turn is granted by the Darboux and boundedness properties.