I am trying to understand the intuition behind Taylor/Maclaurin series.
You have some infinitely differentiable function $f(x)$ and you want to build a series $g(x)$ where $f^{(n)}(x) = g^{(n)}(x)$, i.e. the $n$th derivatives of each give you the same output at some chosen point.
Assuming we have this matching derivative output concept in place, how do we know this necessarily means $f(x)$ and $g(x)$ are equivalent representations of each other?
Normally these approximations are made in the neighborhood of $x=0$ (and yes we could use $x=a$, but for simplicity I'd like to stick with $0$), so it makes sense that every $n$th derivative of $g(x)$ agrees with that of $f(x)$ at $x=0$, since that is how we constructed $g(x)$ in the first place.
But what exactly lets us then take $g(x)$ and say "This will also work for any other $x$, not just $0$, since it is an equivalent to $f(x)$"?
In other words, I don't see why it is obvious that the method of creating the Taylor/Maclaurin series $g(x)$ must necessarily produce an equivalent representation of $f(x)$.
The property of functions that makes this possible is called "analyticity". In general, it is *not* true that a function is determined by its value and derivatives at a single point.

A famous counterexample is the function $f(x)=e^{-1/x^2}$ for $x\neq 0$ with $f(0)=0$. It can be shown that all the derivatives of this function vanish at $x=0$, so its Maclaurin series is identically zero, but obviously the function itself is not identically zero. So it is not analytic at $0$.

Analytic functions, like $e^x$, can indeed be represented by an infinite power series whose coefficients depend only on the values of the function and its derivatives at a single point. For such functions one proves this directly, e.g. via Taylor's theorem with remainder: the error of the $N$th partial sum can be shown to tend to $0$ as $N\to\infty$ for every $x$ in some interval around the expansion point, so matching all derivatives at one point really does pin down the function there.
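To make both behaviors concrete, here is a small numerical sketch (names like `exp_taylor` and `bump` are my own, not standard): the Maclaurin partial sums of the analytic function $e^x$ converge to the true value even away from $0$, while the non-analytic example above disagrees with its (identically zero) Maclaurin series at every $x\neq 0$.

```python
import math

def exp_taylor(x, n_terms):
    # Partial sum of the Maclaurin series of e^x: sum of x^k / k! for k < n_terms.
    return sum(x**k / math.factorial(k) for k in range(n_terms))

# For the analytic function e^x, the partial sums converge to the true value
# even far from the expansion point x = 0.
for n in (5, 10, 20):
    print(n, exp_taylor(2.0, n), math.exp(2.0))

def bump(x):
    # The classic non-analytic example: f(x) = e^{-1/x^2} for x != 0, f(0) = 0.
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# All derivatives of `bump` vanish at 0, so its Maclaurin series is the zero
# series -- yet the function itself is nonzero at every x != 0.
print(bump(0.5))  # nonzero, while the Maclaurin series predicts 0
```

Running this shows the partial sums for $e^2$ closing in on the true value as more terms are added, whereas no number of terms of the zero series will ever recover `bump` away from the origin.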