There is a [generalization of the Taylor series](//en.wikipedia.org/wiki/Taylor_series) that converges to the value of $f(x)$ for any bounded continuous function on $(0,\infty)$, using the calculus of finite differences.
Specifically, according to a theorem of Einar Hille in "Functional Analysis and Semi-Groups", AMS Colloquium Publications 31 (1957), pp. 300–327,
$$f(a+t) = \lim_{h\to 0^+} \sum_{n=0}^\infty \frac{t^n}{n!} \frac{\Delta_h^n f(a)}{h^n} $$ where $\Delta_h^0 f(a) = f(a)$ and, for $n\ge 1$, $$\Delta_h^n f(a) = \Delta_h^{n-1}f(a+h) - \Delta_h^{n-1}f(a)$$
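Before turning to the pathological example, it may help to sanity-check the formula numerically on a well-behaved function. A minimal sketch (the function and parameter names are my own; the forward difference is computed via the standard binomial expansion, and $h$ cannot be taken too small in floating point because the alternating sum cancels catastrophically):

```python
from math import comb, exp, factorial

def fwd_diff(f, a, h, n):
    """n-th forward difference Delta_h^n f(a), via the binomial expansion."""
    return sum((-1) ** (n - k) * comb(n, k) * f(a + k * h) for k in range(n + 1))

def hille_sum(f, a, t, h, n_terms=25):
    """Partial sum of Hille's series: sum_n t^n/n! * Delta_h^n f(a) / h^n."""
    return sum(t ** n / factorial(n) * fwd_diff(f, a, h, n) / h ** n
               for n in range(n_terms))

# For f = exp, a = 0, t = 1 the sums should approach e = 2.71828...
# as h -> 0+.  This is only a qualitative check, since very small h
# destroys the binomial sum numerically.
for h in (0.4, 0.2, 0.1):
    print(h, hille_sum(exp, 0.0, 1.0, h))
```

For $f = \exp$ one can even see why this works: $\Delta_h^n e^a = e^a(e^h-1)^n$, so the series sums to $e^{a + t(e^h-1)/h}$, which tends to $e^{a+t}$ as $h\to 0^+$.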
This is closely related to the Newton series for $f(x)$.
I thought I would give this theorem a try on my favorite bad-boy function $e^{-1/x^2}$. In order to keep the domain strictly positive, as required in the statement of the theorem, I move the interesting point a bit to the right by looking at $$ f(x) = e^{-\frac1{(x-1)^2}}$$ It would be nice to have some sort of series expansion (about $x=1$) that works, rather than always giving zero as the naive Taylor series does.
So the first step is to find those finite differences, and that is reasonably easy (the forward difference carries the sign $(-1)^{n-k}$): $$ \Delta_h^nf(x) = \sum_{k=0}^n (-1)^{n-k}\binom{n}{k}e^{-\frac1{(x-1+kh)^2}} $$
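As a quick check of that closed form (a sketch; $f$ is extended continuously by $f(1)=0$, the helper names are mine, and the forward-difference sign is $(-1)^{n-k}$), the binomial sum agrees with the recursive definition from the theorem:

```python
from math import comb, exp

def f(x):
    """The function e^(-1/(x-1)^2), extended continuously by f(1) = 0."""
    return 0.0 if x == 1.0 else exp(-1.0 / (x - 1.0) ** 2)

def diff_rec(n, x, h):
    """Delta_h^n f(x) from the recursive definition in the theorem."""
    if n == 0:
        return f(x)
    return diff_rec(n - 1, x + h, h) - diff_rec(n - 1, x, h)

def diff_binom(n, x, h):
    """Delta_h^n f(x) from the closed-form binomial sum (sign (-1)^(n-k))."""
    return sum((-1) ** (n - k) * comb(n, k) * f(x + k * h) for k in range(n + 1))

for n in range(5):
    print(n, diff_rec(n, 1.3, 0.1), diff_binom(n, 1.3, 0.1))  # the two agree
```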
And now to expand about $x=1$, we have to plug that expression, with $x=1$, into the theorem and take the limit as $h\to 0$ for each value of $n$.
And here is what puzzles me -- for every value of $n$, that limit is zero. For example, the $n=2$ term gives $$ \frac{t^2}{2!} \lim_{h\to 0^+}\frac{e^{-1/(4h^2)}-2e^{-1/h^2}}{h^2}=\frac{t^2}{2!} \cdot 0 $$
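The same collapse shows up numerically. A small sketch (again with $f(1)$ taken to be $0$ by continuity, and names of my own choosing): for each fixed $n$, the scaled difference $\Delta_h^n f(1)/h^n$ shrinks to zero as $h$ decreases, because $e^{-1/(kh)^2}$ beats every power of $h$:

```python
from math import comb, exp

def f(x):
    """e^(-1/(x-1)^2), extended by f(1) = 0."""
    return 0.0 if x == 1.0 else exp(-1.0 / (x - 1.0) ** 2)

def scaled_diff(n, h):
    """Delta_h^n f(1) / h^n, via the binomial sum for the forward difference."""
    d = sum((-1) ** (n - k) * comb(n, k) * f(1.0 + k * h) for k in range(n + 1))
    return d / h ** n

# For each fixed n the values tend to 0 as h -> 0+, even though
# intermediate values of h can give numbers that are not small at all.
for n in range(1, 5):
    print(n, [scaled_diff(n, h) for h in (0.1, 0.05, 0.025, 0.0125)])
```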
What is happening here? Is it that I'm not allowed to interchange the summation over $n$ with the limit as $h\to 0^+$? If so, this theorem would seem to be of little practical use...