Coming from a statistics background, I'll provide an example related to fitting a model to an analysis dataset. Suppose I suspect the relationship between the mean of the outcome variable ($E(y)$) and a number of predictor variables $x_1, x_2, ..., x_p$ can be described by a nonlinear, differentiable function. Say I fit a multiple regression to my data in the form of a $k^{th}$ order polynomial: $E(y)=p(x_1,x_2,...,x_p)=\beta_0+\beta_{11}x_1+\beta_{21}x_1^2+...+\beta_{k1}x_1^k+\beta_{12}x_2+\beta_{22}x_2^2+...+\beta_{k2}x_2^k+...$, where the $\beta$s are regression coefficients (constants), and achieve a reasonably good fit.
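To make the setup concrete, here is a minimal sketch of such a fit for a single predictor ($p=1$, $k=3$) using NumPy's least-squares polynomial fit; the data-generating coefficients below are hypothetical, chosen just for illustration:

```python
import numpy as np

# Hypothetical data: one predictor x, outcome y generated from a cubic plus noise.
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 100)
y = 1.0 + 0.5 * x - 0.3 * x**2 + 0.1 * x**3 + rng.normal(scale=0.05, size=x.size)

# Fit E(y) = b0 + b1*x + b2*x^2 + b3*x^3 by least squares.
coeffs = np.polynomial.polynomial.polyfit(x, y, deg=3)
print(coeffs)  # estimated (b0, b1, b2, b3), in increasing degree order
```

The fitted `coeffs` play the role of the $\beta$s above: they describe the polynomial $p$, but not (by themselves) any underlying function it might approximate.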
While the formula $E(y)=p(x_1,x_2,...,x_p)$ is useful for making predictions, I'd like to shed light on the underlying function in order to better explain the form of the relationship between the outcome and predictor variables in the data.
Question: Given sufficient terms, is it possible to determine the unique function $f(x_1,x_2,...,x_p)$ that is represented by the Taylor polynomial $p(x_1,x_2,...,x_p)$? More generally, can the process of finding a Taylor series for a function be "reversed" to retrieve the original function?
A finite Taylor polynomial certainly cannot determine the function uniquely. For instance, $f(x)=1+x+\frac{x^2}{2}$ and $g(x)=e^x$ have the same second-order Taylor polynomial at $c=0$.
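You can verify this example symbolically; a quick check with sympy (assuming it is available) shows the two second-order expansions at $c=0$ coincide:

```python
import sympy as sp

x = sp.symbols('x')
f = 1 + x + x**2 / 2
g = sp.exp(x)

# Second-order Taylor polynomials at c = 0 (terms up to, not including, x^3).
Tf = sp.series(f, x, 0, 3).removeO()
Tg = sp.series(g, x, 0, 3).removeO()

print(sp.expand(Tf - Tg) == 0)  # True: the two Taylor polynomials are identical
```

So from the second-order polynomial alone there is no way to tell $f$ and $g$ apart.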
As it turns out, there are functions which cannot be recovered even from their full Taylor series. For instance, define $f(x)=e^{-\frac{1}{x}}$ for $x>0$ and $f(x)=0$ for $x\leq 0$. Then $f^{(n)}(0)=0$ for all $n$, but $f$ is not identically zero.
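One can check the vanishing derivatives symbolically; the sketch below (again assuming sympy) takes right-hand limits at $0$ of the first few derivatives of $e^{-1/x}$, which all agree with the zero left-hand piece:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(-1 / x)

# Right-hand limits at 0 of f and its first few derivatives:
for n in range(4):
    lim = sp.limit(sp.diff(f, x, n), x, 0, '+')
    print(n, lim)  # each limit is 0, matching f^(n)(0) = 0
```

Every derivative vanishes at the origin, so the Taylor series at $0$ is identically zero and cannot distinguish this $f$ from the zero function.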