On pg. 110 of Rudin's Principles of Mathematical Analysis, it is shown that if $f$ is a real function on $[a, b]$ with $f^{(n)}(t)$ existing for every $t \in (a,b)$, then there exists some $x \in (a, b)$ such that
$$ f(b) = \sum_{k=0}^{n-1} \frac{f^{(k)}(a)}{k!} (b - a)^k + \frac{f^{(n)}(x)}{n!} (b - a)^n $$
Now on Wikipedia, Taylor's Theorem is stated at least once as the following:
$$ f(b) = f(a) + f'(a)(b-a) + \ldots + h_n(b)(b-a)^n $$
with
$$ \lim_{b \to a} h_n(b) = 0 $$
Question: Combining these two proofs/statements, it seems that
$$ \lim_{b \to a} h_n(b) = \lim_{b \to a} \frac{f^{(n)}(x_b)}{n!} = 0 $$
where, for each $b$, $x_b \in (a, b)$ is the point produced by Rudin's theorem.
But why? That is, how do we know that the error coefficient Rudin derives converges to zero, not as $n$ goes to infinity, but as $b \to a$?
We don't know that the error term goes to zero as $b \to a$. There are functions which are infinitely differentiable on $(0,1)$ but not even continuous (let alone analytic) at $0$. For example, plug $$f(x)=\sin\left(\frac1x\right),$$ taking $f(0) = 0$, into your statements. With $a = 0$ and $n = 1$, Rudin's statement (as you quoted it) holds for every $b \in (0,1)$, and $f$ is infinitely differentiable, indeed analytic, at every point of $(0,1)$; but $h_1(b) = \sin(1/b)/b$ does not converge to zero as $b$ approaches zero. (For $n \ge 2$ the Taylor polynomial at $0$ cannot even be written down, since $f'(0)$ does not exist.)
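Here is a quick numerical sketch of that failure. The function and the choice $n = 1$ come from the example above; the name `h1` and the particular sample points $b_k = 1/(2\pi k + \pi/2)$ are my own, chosen so that $\sin(1/b_k) = 1$:

```python
import math

# The example above: f(x) = sin(1/x) on (0, 1), with f(0) = 0.
# For n = 1, comparing the two statements gives h_1(b) = (f(b) - f(0)) / (b - 0).
def h1(b):
    """Peano-style coefficient h_1(b) = sin(1/b) / b for the example above."""
    return math.sin(1.0 / b) / b

# Sample b -> 0 along b_k = 1 / (2*pi*k + pi/2), where sin(1/b_k) = 1,
# so h_1(b_k) = 1/b_k grows without bound rather than tending to 0.
for k in (1, 10, 100, 1000):
    b = 1.0 / (2 * math.pi * k + math.pi / 2)
    print(f"b = {b:.3e}   h_1(b) = {h1(b):.3e}")
```

Along this sequence $h_1(b_k) = 1/b_k \to \infty$, so $\lim_{b \to a} h_n(b) = 0$ fails; consistent with the Wikipedia statement, its hypothesis fails too, since $f'(0)$ does not exist.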