I've been reading Taylor's original work, https://books.google.com/books?id=r-Gq9YyZYXYC&pg=PP3#v=onepage&q&f=false (there are also translations: http://17centurymaths.com/contents/taylorscontents.html ).
Taylor was working with the simple notion of a derivative as a slope. His work was written in an era when finite-difference representations of polynomials and curves were central; e.g., Charles Babbage and his famous finite-difference computing machine belong to the same mathematical tradition as Taylor.
My historical research indicates that precise, modern notions of limits were not part of Taylor's definition of his series; at some point in history, the modern definitions modified Taylor's idea.
Notation for differences and derivatives has changed over the centuries, but if I take a list of finite-difference coefficients [2,1,0], I can demonstrate the calculation of x^2 in the rightmost element of my list. Repeated addition of finite differences corresponds to integration in calculus, and I get the following (a sketch of the procedure follows the table):
[2,3,1], [2,5,4], [2,7,9], [2,9,16] ...
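A minimal sketch of that repeated addition, in Python (the helper name `difference_table` and the step count are my own; the list layout [second difference, first difference, value] is the one used above):

```python
def difference_table(row, steps):
    """Advance a row of finite differences by repeated addition.

    Assumed layout (from the lists above): [second difference,
    first difference, value], so the rightmost entry is f(x).
    """
    rows = [row[:]]
    for _ in range(steps):
        row = row[:]
        # Update right-to-left: each entry absorbs the OLD value of its
        # left neighbour, one "turn" of a Babbage-style difference engine.
        for i in range(len(row) - 1, 0, -1):
            row[i] += row[i - 1]
        rows.append(row[:])
    return rows

print(difference_table([2, 1, 0], 4))
# [[2, 1, 0], [2, 3, 1], [2, 5, 4], [2, 7, 9], [2, 9, 16]]
```

Each pass adds the second difference into the first difference and the first difference into the value, which is why the rightmost column walks through the successive squares 0, 1, 4, 9, 16.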
But a first difference has slightly different properties than a first derivative in modern calculus.
A limit used for taking the slope of a polynomial (or power term) generally arrives at the same value whether taken from the left or right side.
The same is not true for finite differences.
The first finite difference of x^2 at x=3 is either 5 or 7, depending on whether you take the difference from the left or right.
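Concretely (my worked example, taking f(x) = x^2 with a unit step h = 1, writing ∇ for the backward difference and Δ for the forward difference):

$$\nabla f(3) = f(3) - f(2) = 9 - 4 = 5, \qquad \Delta f(3) = f(4) - f(3) = 16 - 9 = 7.$$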
However, at some point in history, the idea that a Taylor series requires both the left and right derivatives to exist and agree became popular.
The idea clearly can't be attributed to Taylor!
Who was the first person to insist that a Taylor series have infinitely many left- and right-sided derivatives?
What justification did they give for insisting on this requirement and attributing it to Taylor?
(Note: this requirement is not listed as a necessary property of a Taylor series in the Collins mathematical dictionary, nor in several other mathematical resources. The dictionary is edited by a board of mathematicians, so the lacuna may be intentional.)
There are certainly proofs that came into existence after Taylor invented his method (which produces a series), proofs that may rely on uniform convergence and other properties not guaranteed by Taylor's original algorithm alone.
I don't see why these later developments don't bear the names of the mathematicians who invented them.
However, I see no reason (for example) why the Taylor series of a function such as x^2, which is smooth, pointwise convergent, and has constant coefficients, couldn't have been applied by Taylor to a truncated version of that function, regardless of whether the function's left or right derivative is wrecked (so long as it's only one side).
Demonstration: if, in a particular problem, x^2 is restricted to x > 0 alone, then the left-handed limit at the boundary becomes undefined. But the Taylor series for x^2 was defined before I placed any restriction on the domain of x^2; the series still has all the well-known properties of a Taylor series.
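For instance (my worked example, expanding about a = 1, an interior point of the restricted domain):

$$x^2 = 1 + 2(x - 1) + (x - 1)^2, \qquad \frac{f^{(n)}(1)}{n!} = 0 \;\text{ for } n \ge 3,$$

and nothing in this computation refers to points with x ≤ 0.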
The smoothness of the series would still exist, and the power-series representation with constant coefficients still exists. Convergence at all points is simply not required for a given power series to be a Taylor series; the series does not converge to the function on the left side of my truncated domain, but I don't see how that is relevant to whether or not I have a Taylor series.
So, who added the requirement that a Taylor series (in order to exist) must have infinitely many left- and right-sided derivatives?
Is this just a curriculum issue among different mathematical schools over the centuries, or has a formal (and provably definitive) reason been given for the requirement? E.g., why do many schools insist that infinite (two-sided) differentiability is important in the definition of a Taylor series?