Suppose $f(x)$ is real-analytic on an interval $I$ containing the points $a, b, a+b$. Then we have
$$ f(a+b) = \sum_{k=0}^{\infty}\frac{f^{(k)}(a)}{k!}b^k \qquad (= T(f;a+b)_a). $$
Reversing the roles of $a$ and $b$, we also have
$$ f(a+b) = \sum_{k=0}^{\infty}\frac{f^{(k)}(b)}{k!}a^k \qquad (= T(f;a+b)_b). $$
There are interesting things to say about the cases $a=0$, $a=-b$, etc.

My question: suppose $0<b<a$. Is there a function $f(x)$ such that $T(f;a+b)_a$ converges slower than $T(f;a+b)_b$?

Numerical experimentation with some common choices seems to suggest not; things like $f(x)=\sin(2x)$, $a=4/5$, $b=77/100$ worked for the first $30$-or-so even cases, but not afterwards. Likewise, I don't think it works for $e^x$, for $1/(1-x)$ as long as $a,b,a+b\in(-1,1)$, or for several other standard choices.
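The kind of numerical experiment described above can be sketched as follows, using the question's own example $f(x)=\sin(2x)$, $a=4/5$, $b=77/100$ (the helper name `taylor_error_sin2x` is mine, not from the question):

```python
import math

def taylor_error_sin2x(center, x, n):
    """|f(x) - T_n(x)| for f(t) = sin(2t) expanded at `center`,
    using the closed form f^(k)(t) = 2^k * sin(2t + k*pi/2)."""
    approx = sum(
        2**k * math.sin(2 * center + k * math.pi / 2) / math.factorial(k)
        * (x - center) ** k
        for k in range(n + 1)
    )
    return abs(math.sin(2 * x) - approx)

a, b = 4 / 5, 77 / 100
x = a + b
for n in (5, 10, 20, 40):
    err_a = taylor_error_sin2x(a, x, n)  # T(f; a+b)_a: steps of size b
    err_b = taylor_error_sin2x(b, x, n)  # T(f; a+b)_b: steps of size a
    print(n, err_a, err_b, err_a <= err_b)
```

Printing the comparison for a range of $n$ is how one would observe the behavior the question reports: the series based at the larger center usually, but not always, has the smaller partial-sum error.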
Rates of convergence of two related Taylor series
Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 2 answers below.
Here's another class of functions for which the claim is true.
Let $I$ be an interval containing $a, b, c$ satisfying $b < a < c$. If $f$ is a real analytic function on $I$ such that
- The Taylor series of $f$ centered at $a$ (and $b$) converges at the point $c$
- Every derivative $f^{(n)}$ is monotonic on $I$
then the Taylor series based at $b$ converges slower than the Taylor series based at $a$.
This class in particular contains $e^x$ and $\frac{1}{1-x}$ and many of the standard functions. (For example, this also includes any of the trig functions on the interval $(0, \pi/2)$.)
In this case the proof is a bit simpler: there is an explicit formula for the remainder term $R_n(x;a)$ for the Taylor polynomial of degree $n$ based at $a$, namely
$$ R_n(x;a) = \frac{1}{n!} \int_a^x (x-t)^n f^{(n+1)}(t) ~dt $$
Under these assumptions, we have
$$ R_n(c;b) - R_n(c;a) = D_n(b;a) = \frac{1}{n!} \int_b^a (c-t)^n f^{(n+1)}(t) ~dt $$
and all three of $R_n(c;b)$, $R_n(c;a)$, and $D_n(b;a)$ have the same sign: since $f^{(n)}$ is monotonic, $f^{(n+1)}$ has constant sign on $I$, and $(c-t)^n \ge 0$ on each range of integration. Hence $|R_n(c;b)| = |R_n(c;a)| + |D_n(b;a)| \ge |R_n(c;a)|$, which is the claim.
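The sign and magnitude comparison above can be checked numerically for $f(x) = e^x$, whose derivatives are all positive and increasing, so every remainder is positive (the helper name `remainder_exp` and the sample points $b=0.2$, $a=0.5$, $c=1$ are my choices for illustration):

```python
import math

def remainder_exp(center, c, n):
    """R_n(c; center) = e^c minus the degree-n Taylor polynomial
    of e^x based at `center`, evaluated at c."""
    poly = sum(
        math.exp(center) / math.factorial(k) * (c - center) ** k
        for k in range(n + 1)
    )
    return math.exp(c) - poly

b, a, c = 0.2, 0.5, 1.0  # b < a < c, as in the claim
for n in range(6):
    rb, ra = remainder_exp(b, c, n), remainder_exp(a, c, n)
    # Same sign, and the series based at b lags behind at every degree.
    print(n, rb, ra, rb * ra > 0 and abs(rb) >= abs(ra))
```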
The question can be answered affirmatively when the Taylor series at $a$ has a finite radius of convergence. Denote this radius of convergence by $R$; the following is a sketch of the proof.
By a bit of complex analysis (a power series has a singularity on its circle of convergence, and that singularity lies within distance $R + (a-b)$ of $b$), the radius of convergence of the Taylor series centered at $b$ is at most $R + a - b$.
These two facts (the radius at $a$ is exactly $R$, while the radius at $b$ is at most $R + a - b$) mean (i) that the Taylor series at $a$ converges at $c$ like the geometric series with ratio $\frac{c-a}{R}$ and (ii) that the Taylor series at $b$ converges like a geometric series with ratio at least $\frac{c-b}{R+a-b} > \frac{c-a}{R}$. A standard epsilon argument then tells you that the series based at $b$ converges slower than the series based at $a$.
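For $f(x) = \frac{1}{1-x}$ the geometric ratios above are exact and easy to verify: the degree-$n$ remainder at $c$ of the series based at a center $p$ is $\left(\frac{c-p}{1-p}\right)^{n+1}\!/(1-c)$, and for the center $a$ the radius is $R = 1-a$, so the two ratios are exactly $\frac{c-a}{R}$ and $\frac{c-b}{R+a-b}$. A small sketch (the helper name `taylor_error_ratio` and the sample points are my choices):

```python
def taylor_error_ratio(center, c, n):
    """Exact degree-n Taylor remainder of 1/(1-x) based at `center`,
    evaluated at c: the geometric tail r**(n+1) / (1-c), r = (c-center)/(1-center)."""
    r = (c - center) / (1 - center)
    return r ** (n + 1) / (1 - c)

b, a, c = 0.1, 0.3, 0.6  # b < a < c, all inside (-1, 1)
for n in (5, 10, 20):
    # Error at a shrinks like ((c-a)/(1-a))^n; at b like the larger ((c-b)/(1-b))^n.
    print(n, taylor_error_ratio(a, c, n), taylor_error_ratio(b, c, n))
```

Here $\frac{c-a}{1-a} = \frac{0.3}{0.7} \approx 0.43$ while $\frac{c-b}{1-b} = \frac{0.5}{0.9} \approx 0.56$, so the series based at $b$ is visibly the slower one at every degree.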