Rates of convergence of two related Taylor series


Suppose $f(x)$ is real-analytic on an interval $I$ containing some points $a,b,a+b$. Then we have $$ f(a+b) = \sum_{k=0}^{\infty}\frac{f^{(k)}(a)}{k!}b^k\qquad (= T(f;a+b)_a) $$Reversing the roles of $a$ and $b$, we also have $$ f(a+b) = \sum_{k=0}^{\infty}\frac{f^{(k)}(b)}{k!}a^k\qquad (= T(f;a+b)_b) $$There are interesting things to say about the cases $a=0$, $a=-b$, etc. My question: suppose $0<b<a$. Is there a function $f(x)$ such that $T(f;a+b)_a$ converges slower than $T(f;a+b)_b$? Numerical experimentation with some common choices seems to suggest not; things like $f(x)=\sin(2x),$ $a=4/5$, $b=77/100$ worked for the first $30$-or-so even cases, but not afterwards. Likewise, I don't think it works for $e^x$, $1/(1-x)$ as long as $a,b,a+b\in(-1,1)$, or several other standard choices.
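For anyone who wants to reproduce the experiment, here is a minimal sketch (my own, not part of the question) comparing the two partial sums for $f(x)=\sin(2x)$, $a=4/5$, $b=77/100$, using the closed form $f^{(k)}(x) = 2^k \sin(2x + k\pi/2)$ for the derivatives:

```python
import math

def partial_sum(center, x, N):
    """Degree-N Taylor polynomial of sin(2t), centered at `center`, evaluated at x.

    Uses the closed form for the k-th derivative:
    (d/dt)^k sin(2t) = 2^k * sin(2t + k*pi/2).
    """
    total = 0.0
    for k in range(N + 1):
        deriv = 2.0**k * math.sin(2.0 * center + k * math.pi / 2)
        total += deriv / math.factorial(k) * (x - center)**k
    return total

a, b = 4 / 5, 77 / 100
c = a + b
exact = math.sin(2 * c)

# Compare the remainders |T_N(f;c)_a - f(c)| and |T_N(f;c)_b - f(c)| as N grows.
for N in range(0, 41, 5):
    err_a = abs(partial_sum(a, c, N) - exact)
    err_b = abs(partial_sum(b, c, N) - exact)
    print(f"N={N:2d}  err_a={err_a:.3e}  err_b={err_b:.3e}")
```

Both errors shrink super-geometrically (the radius of convergence is infinite here), so which center is "ahead" at a given degree can flip, which may explain the behavior seen after the first 30-or-so cases.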


2 Answers

BEST ANSWER

The question you asked can be formulated in the following way:

Let $f$ be a real analytic function on an interval $I$ that contains three points $b < a < c$. If the Taylor series of $f$ centered at $b$ and at $a$ both converge at $c$, is it true that the series at $b$ cannot converge faster than the series at $a$?

The answer is yes when the Taylor series at $a$ has a finite radius of convergence. Denote this radius by $R$. The following is a sketch of the proof:

By a bit of complex analysis, the nearest singularity of $f$ lies at distance exactly $R$ from $a$, hence at distance at most $R + (a - b)$ from $b$; so the radius of convergence of the Taylor series centered at $b$ is at most $R + (a - b)$, while the radius centered at $a$ is $R$.

These two facts mean (i) that the Taylor series at $a$ converges like a geometric series with ratio $\frac{c-a}{R}$, and (ii) that the Taylor series at $b$ converges like a geometric series with ratio at least $\frac{c - b}{R + a - b} > \frac{c-a}{R}$, where the inequality holds because $R > c - a$ (the series at $a$ converges at $c$). A standard epsilon argument then tells you that

  • For any $\epsilon > 0$ there exists a constant $C_\epsilon$ such that the remainder of approximating $f(c)$ by the Taylor polynomial centered at $a$ of degree $N$ is $\leq C_\epsilon \left(\frac{c-a}{R} + \epsilon \right)^N$.
  • For any $\epsilon > 0$ there exist a constant $D_\epsilon$ and an increasing sequence $(N_i)_{i = 0, 1, \ldots}$ such that the remainder of approximating $f(c)$ by the Taylor polynomial centered at $b$ of degree $N_i$ is $\geq D_\epsilon \left( \frac{c - b}{R + a - b} - \epsilon \right)^{N_i}$.
  • Therefore, along the sequence $(N_i)$, the remainder of the polynomial centered at $b$ is eventually larger than the remainder of the polynomial centered at $a$, so the series based at $a$ cannot converge more slowly than the series based at $b$.
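As a sanity check of this sketch (my own illustration, not part of the answer), take $f(x) = 1/(1-x)$, whose Taylor coefficients at any center are explicit: $f^{(k)}(x)/k! = (1-x)^{-(k+1)}$. With the hypothetical values $b = 0.2 < a = 0.3 < c = 0.5$ we have $R = 1 - a = 0.7$, so the predicted geometric ratios are $\frac{c-a}{R} = \frac{0.2}{0.7} \approx 0.286$ at $a$ and $\frac{c-b}{R + a - b} = \frac{0.3}{0.8} = 0.375$ at $b$:

```python
def partial_sum(center, x, N):
    # Taylor coefficients of 1/(1-t) at `center`: f^(k)(center)/k! = (1-center)^-(k+1)
    return sum((x - center)**k / (1 - center)**(k + 1) for k in range(N + 1))

b, a, c = 0.2, 0.3, 0.5
exact = 1 / (1 - c)

# err_a should decay geometrically with ratio ~0.286, err_b with ratio 0.375,
# so the series centered at a (closer to c) converges faster.
for N in (5, 10, 20):
    err_a = abs(partial_sum(a, c, N) - exact)
    err_b = abs(partial_sum(b, c, N) - exact)
    print(f"N={N:2d}  err_a={err_a:.3e}  err_b={err_b:.3e}")
```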
Second answer

Here's another class of functions for which the claim is true.

Let $I$ be an interval containing $a, b, c$ satisfying $b < a < c$. If $f$ is a real analytic function on $I$ such that

  1. The Taylor series of $f$ centered at $a$ (and $b$) converges at the point $c$
  2. Every derivative $f^{(n)}$ is monotonic on $I$

then the Taylor series based at $b$ converges slower than the Taylor series based at $a$.

This class in particular contains $e^x$ and $\frac{1}{1-x}$ and many of the standard functions. (For example, this also includes any of the trig functions on the interval $(0, \pi/2)$.)

In this case the proof is a bit simpler: there is an explicit formula for the remainder term $R_n(x;a)$ for the Taylor polynomial of degree $n$ based at $a$, namely

$$ R_n(x;a) = \frac{1}{n!} \int_a^x (x-t)^n f^{(n+1)}(t) ~dt $$

Under these assumptions we then have

$$ R_n(c;b) - R_n(c;a) = D_n(b;a) = \frac{1}{n!} \int_b^a (c-t)^n f^{(n+1)}(t) ~dt $$

and, by the monotonicity assumption, $f^{(n+1)}$ has constant sign on $I$, so all three of $R_n(c;b)$, $R_n(c;a)$, and $D_n(b;a)$ have the same sign. Hence $|R_n(c;b)| = |R_n(c;a)| + |D_n(b;a)| \geq |R_n(c;a)|$, which is the claim.
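A quick numerical illustration of this argument (mine, not the answer's): for $f(x) = e^x$ every derivative is $e^t$, which is increasing, so the remainders should share a sign and satisfy $|R_n(c;b)| \geq |R_n(c;a)|$. With the hypothetical values $b = 0.2$, $a = 0.3$, $c = 0.5$:

```python
import math

def remainder(center, x, n):
    # R_n(x; center) for f(t) = e^t: f(x) minus the degree-n Taylor polynomial at `center`
    poly = sum(math.exp(center) * (x - center)**k / math.factorial(k)
               for k in range(n + 1))
    return math.exp(x) - poly

b, a, c = 0.2, 0.3, 0.5
for n in range(1, 8):
    R_b, R_a = remainder(b, c, n), remainder(a, c, n)
    D = R_b - R_a  # equals (1/n!) * integral_b^a (c-t)^n e^t dt, positive here
    print(f"n={n}  R_b={R_b:.3e}  R_a={R_a:.3e}  D={D:.3e}")
```

Both remainders stay positive and $R_n(c;b)$ dominates $R_n(c;a)$ at every degree, matching the same-sign argument above.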