Why doesn't the Taylor expansion of $ e^{-1/x^2} $ at $0$ converge to the function itself?


I know that if you take the Taylor expansion of $e^{-1/x^2}$ at $x = 0$, you get $f(x) = 0$. However, I was wondering whether a rigorous proof can be given of why the Taylor expansion and the actual function differ at every point other than $0$ in this case. I was thinking of using the Lagrange remainder: $$ R_n(x) = \frac{f^{(n)}(z)x^n}{n!} $$ where $z \in (0, x)$ for any $x$. I tried to show why $\lim_{n \rightarrow \infty} R_n(x) \neq 0$, but I ran into trouble. Specifically, it seems like the $n!$ term grows faster than the numerator, and it was quite hard to characterize the value of $f^{(n)}(z)$. Any help would be appreciated.


2 Answers

Best answer:

Since the Maclaurin polynomial of every degree $n$ is identically zero, the remainder term is just $f$ itself, and since this does not vanish for $x \neq 0$, $f$ is not represented by its Taylor series at $x = 0$.
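To make the quoted claim concrete, here is a standard sketch (not part of the original answer) of why every Maclaurin coefficient of $f(x) = e^{-1/x^2}$ (with $f(0) = 0$) vanishes. By induction, for $x \neq 0$ every derivative has the form $f^{(n)}(x) = P_n(1/x)\, e^{-1/x^2}$ for some polynomial $P_n$. At the origin each derivative is computed from the difference quotient: $$ f^{(n+1)}(0) = \lim_{x \to 0} \frac{f^{(n)}(x) - f^{(n)}(0)}{x} = \lim_{x \to 0} \frac{1}{x}\, P_n\!\left(\frac{1}{x}\right) e^{-1/x^2} = 0, $$ because $t^k e^{-t^2} \to 0$ as $|t| \to \infty$ for every fixed $k$ (set $t = 1/x$). Hence $f^{(n)}(0) = 0$ for all $n$, and every Maclaurin polynomial is identically zero.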

Second answer:

The Taylor expansion does converge: to the function that is everywhere $0$. For every $n$, the remainder term at $x$ (the error in the Taylor expansion) is $\exp(-1/x^2)$, the value of the function you started with.
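As a quick sanity check (not from the original thread; this assumes SymPy is available), one can verify symbolically that the first few derivatives of $e^{-1/x^2}$ tend to $0$ at the origin, so every Maclaurin coefficient vanishes and the remainder is the function itself:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1 / x**2)

# Each derivative of f is (a polynomial in 1/x) * exp(-1/x^2); the
# exponential factor dominates the polynomial blow-up, so the limit
# at 0 is 0 for every order -- every Maclaurin coefficient vanishes.
for n in range(1, 4):
    print(n, sp.limit(sp.diff(f, x, n), x, 0, '+'))  # each limit is 0
```

The same conclusion holds for the left-sided limit, since $e^{-1/x^2}$ is even in $x$.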