When doing A-levels, we showed that any $n$th degree polynomial function $p(x)$ can be expressed in terms of its derivatives at a point $a$:
$$p(x)=\sum_{i=0}^n \frac{p^{(i)}(a)}{i!}(x-a)^i$$
and this was presented to us as Taylor's theorem. Now at university, in our Analysis II course, we proved the following theorem, also dubbed Taylor's theorem:
If $f:[a,b]\to\mathbb R$ is a continuous function that is differentiable on $(a,b)$, and $x,x+h\in(a,b)$, then $f(x+h)-f(x)=f'(x+\theta h)h$ for some $\theta\in(0,1)$.
We proved this using the mean value theorem. To me this seems like quite an unrelated result: does it have anything to do with the polynomial expansion in terms of derivatives?
There is indeed a connection. Let's rewrite your second Taylor's theorem (which is really just a restatement of the Mean Value Theorem) a little. Set $x + h = a$, so that $h = a - x$, and rearrange:
$$ f(x) = f(a) + f^\prime(c)(x - a) $$
where $c = x + \theta h$ lies strictly between $x$ and $a$. This looks remarkably like the linear (first-order) Taylor expansion of a function $f(x)$ about $a$:
$$ f(x) = f(a) + f^\prime(a)(x - a) + R_1(x) $$
where $R_1(x)$ is the remainder of the linear approximation.
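As a quick sanity check of the mean value form above (my own illustration, not part of the original answer), we can take $f = \exp$, where $f' = \exp$ lets us solve for the intermediate point $c$ in closed form and confirm it really lies strictly between the endpoints:

```python
import math

# For f = exp on [a, x], find the point c guaranteed by the mean value
# theorem, i.e. f(x) - f(a) = f'(c) (x - a).  Since f' = exp, we can
# solve exp(c) = slope directly and check that a < c < x.

f = math.exp
a, x = 0.0, 1.0

slope = (f(x) - f(a)) / (x - a)   # (e - 1) / 1
c = math.log(slope)               # f'(c) = exp(c) = slope  =>  c = ln(slope)

assert a < c < x                  # c = x + theta*h with theta in (0, 1)
assert abs(f(a) + math.exp(c) * (x - a) - f(x)) < 1e-12
print(c)
```

For a general $f$ one would locate $c$ numerically (e.g. by bisection on $f'(c) - \text{slope}$); the exponential is just a case where no root-finding is needed.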
So what this tells us is that the exact difference between $f(x)$ and $f(a)$ can be written as something which closely resembles the first-order Taylor expansion, except that the derivative is evaluated at an intermediate point $c$ rather than at $a$. You can repeat this idea at higher orders, and once you generalise it to the $n$th order you recover Taylor's Theorem, with the remainder expressed through a derivative at an intermediate point.
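For reference, the $n$th-order statement (this is the standard Lagrange form of the remainder, which the original post does not write out): if $f$ is $n+1$ times differentiable on an interval containing $a$ and $x$, then

$$f(x) = \sum_{i=0}^{n} \frac{f^{(i)}(a)}{i!}(x-a)^i + \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1}$$

for some $c$ strictly between $a$ and $x$. Taking $n = 0$ recovers exactly the Mean Value Theorem form above, and for a polynomial $p$ of degree $n$ the $(n+1)$th derivative vanishes, so the remainder is zero and you get back the A-level identity.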