So I see the argument presented for Taylor series, that
$$\sum c_n (x-a)^n = \sum \frac{f^{(n)}(a)}{n!} (x-a)^n$$
that is, $c_n = f^{(n)}(a)/n!$, which is obtained by setting $x=a$.
The question is: since the above only holds when $x=a$, how can you use the Taylor series for variable values of $x$? Basically, the theorem states
$$f(x) = \sum c_n(x-a)^n$$
where the coefficients are given by $c_n = f^{(n)}(a)/n!$.
Yet the proof only seems to work in the case $x=a$.
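One way to see why setting $x=a$ is legitimate: the identity $f(x) = \sum c_n (x-a)^n$ is assumed to hold for every $x$ in an interval around $a$, and evaluating at $x=a$ is merely the device that isolates one coefficient at a time. A sketch of the standard extraction argument:

```latex
% Differentiate the series k times, term by term
% (valid inside the radius of convergence):
f^{(k)}(x) = \sum_{n \ge k} n(n-1)\cdots(n-k+1)\, c_n\, (x-a)^{n-k}.
% Every term with n > k still carries a factor of (x - a),
% so evaluating at x = a kills all of them, leaving only n = k:
f^{(k)}(a) = k!\, c_k
  \quad\Longrightarrow\quad
  c_k = \frac{f^{(k)}(a)}{k!}.
```

So the equality holds for all $x$ in the interval of convergence; $x=a$ is only used to read off the coefficients.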
Also:
how can $$e^x = \sum \frac{x^n}{n!} = \sum \frac{e^2}{n!} (x-2)^n$$ both hold? (One is for $a = 0$ and the other for $a = 2$.)
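A quick numerical check (a sketch; the helper name `taylor_exp` is mine, not from the question) that the two expansions of $e^x$ really do agree, even though their coefficients look completely different:

```python
import math

# Partial sums of the Taylor series of e^x centered at a.
# The general term is f^(n)(a)/n! * (x-a)^n = e^a / n! * (x-a)^n.
def taylor_exp(x, a, terms):
    """Sum of the first `terms` terms of e^x expanded about a."""
    return sum(math.exp(a) * (x - a) ** n / math.factorial(n)
               for n in range(terms))

x = 1.0
around_0 = taylor_exp(x, 0.0, 30)  # expansion with a = 0
around_2 = taylor_exp(x, 2.0, 30)  # expansion with a = 2
# Both partial sums approximate e^1 = 2.71828... to high precision.
```

Both series converge to the same function; they are just organized around different centers.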
Taylor's Theorem states the following: one can represent a function $f(x)$ as an infinite series that approximates the function, with a margin of error defined and bounded by Taylor's Theorem:
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!} (x-x_0)^n$$
This is the definition of the Taylor series approximation centered at $x_0$. Notice that the approximation is centered at $x_0$: the margin of error is smallest near $x_0$ and exactly $0$ at $x_0$, where the function and the Taylor series have the same value and the same derivatives. As you move away from $x_0$ the error grows, but it is bounded by the following formula, which can be deduced by integration and bounding: the Lagrange error bound for $P_{n}(x)$.
We know that the nth Taylor polynomial is $P_{n}(x)$, and we have spent a lot of time in this chapter calculating Taylor polynomials and Taylor Series. The question is, for a specific value of $x$, how badly does a Taylor polynomial represent its function? We define the error of the nth Taylor polynomial to be
$E_{n}(x)=f(x)-P_{n}(x)$.
That is, error is the actual value minus the Taylor polynomial's value. Of course, this could be positive or negative. So, we force it to be positive by taking an absolute value.
$|E_{n}(x)|=|f(x)-P_{n}(x)|$.
The following theorem tells us how to bound this error; that is, it tells us how closely the Taylor polynomial approximates the function, by putting an upper bound on $|E_{n}(x)|$. At first, the formula may seem confusing. I'll give the formula, then explain it formally, then do some examples. You may want to simply skip to the examples.
Theorem 10.1 (Lagrange Error Bound). Let $f$ be a function such that it and all of its derivatives are continuous. If $P_{n}(x)$ is the nth Taylor polynomial for $f(x)$ centered at $x=a$, then the error is bounded by
$|E_{n}(x)|\le\frac{M}{(n+1)!}|x-a|^{n+1}$
where $M$ is some value satisfying $|f^{(n+1)}(t)|\le M$ for all $t$ in the interval between $a$ and $x$.
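To make the bound concrete, here is a small numerical sketch (the function name is mine, for illustration) using $f(x) = e^x$ centered at $a = 0$: on $[0,1]$ every derivative is $e^t \le e$, so $M = e$ works.

```python
import math

def taylor_poly_exp(x, n):
    """nth Taylor polynomial of e^x centered at a = 0."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x = 1.0
M = math.e  # |f^(n+1)(t)| = e^t <= e on [0, 1]
for n in range(1, 10):
    actual_error = abs(math.exp(x) - taylor_poly_exp(x, n))
    lagrange_bound = M / math.factorial(n + 1) * abs(x) ** (n + 1)
    # The true error never exceeds the Lagrange bound,
    # and both shrink rapidly as n grows.
    assert actual_error <= lagrange_bound
```

The factorial in the denominator is what makes the bound collapse to $0$ so quickly as $n$ grows.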
Therefore the error is bounded, and for fixed $x$ the bound $\frac{M}{(n+1)!}|x-a|^{n+1}$ shrinks to $0$ as $n \to \infty$ (the factorial in the denominator eventually dominates), provided $M$ does not grow too quickly with $n$. So the "sequence of errors" converges to $0$: the series converges to the function rather than diverging, and as $n$ tends to infinity the difference between the partial sums and $f(x)$ tends to $0$.
In conclusion,
The theorem works because you are approximating the function with a convergent series, whose radius of convergence can be found with the ratio test.
Analogous to a function where you input values, you can input any $x$ into the series, as long as it is within a certain distance of $a$ (the radius of convergence). That is why you can input any valid value of $x$ into this series.
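For the exponential series in the question, for instance, the ratio test gives convergence for every $x$:

```latex
\lim_{n\to\infty}
  \left| \frac{x^{n+1}/(n+1)!}{x^{n}/n!} \right|
  = \lim_{n\to\infty} \frac{|x|}{n+1} = 0 < 1
  \quad \text{for all } x,
% so the radius of convergence is infinite: every real x is a
% valid input to the series for e^x, whatever the center a.
```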
I hope this was clear enough; I attempted to keep the mathematical notation to a minimum so the explanation stays accessible.