This isn't a question about Taylor series.
A little background: I'm a robotics engineering student looking at polynomial trajectories, specifically using polynomials to define the rate at which the robot follows some path (the advantage of this is setting initial and final velocity/acceleration/jerk/etc to zero, while giving us our desired initial and final positions). Put another way, we're doing interpolation.
With a linear "polynomial", we can only set the initial and final positions. With a cubic, we can additionally set the initial and final velocities; with a quintic, the initial and final accelerations as well. You can see that as we increase the order, we can prescribe more derivative values. I'm specifically interested in a special case: all prescribed derivatives set to 0, initial time set to 0, final time set to 1. I'm only interested in the [0, 1] domain.
I noticed while I was playing with solving higher order polynomials that the trajectory seems to converge on some function. So, my question is simple; does it? And if it does converge, what does it converge to?
I think the trajectory doesn't converge, but I don't know how to prove it. If it does converge, I'm very interested to find out what it converges to.
The usefulness of this "infinitely" "optimized" trajectory is not clear. However, a proof that the limit doesn't exist would let me (and anyone else wondering about this) stop spending time on it.
I've written a Python script to visualize what's happening with linear algebra, although it breaks down around order 25. It's on this repl. You need to close Matplotlib to generate the next graph, and it'll keep going up to order 29 by default. You might have to click a button to reveal the code.
By applying an invertible affine transformation, if necessary, we may assume the polynomial trajectory $\mathbf{P}(t)$ satisfies $\mathbf{P}(0)=\mathbf{0}$ and $\mathbf{P}(1)=\mathbf{e}_1$ (with all the higher derivatives vanishing at both endpoints). Then all but the first coordinate vanish identically, hence it suffices to consider the 1D case.
The problem then reduces to identifying the unique polynomial $P_k(t)$ of degree $2k-1$, $k = 1, 2, \ldots$, satisfying
\begin{gather*} P_k(0)=0,\qquad P_k(1)=1, \qquad \text{and} \\ P_k^{(i)}(t)=0 \qquad \text{for all $t\in\{0,1\}$ and $i\in\{1,\ldots,k-1\}$}. \end{gather*}
Since $P_k'$ has degree $2k-2$ and vanishes to order $k-1$ at both $t=0$ and $t=1$, it must equal $c\,t^{k-1}(1-t)^{k-1}$ for some constant $c$, and the condition $P_k(1)=1$ fixes $c = 1/B(k,k)$. Hence
$$ P_k(t) = \frac{1}{B(k,k)} \int_{0}^{t} (1-s)^{k-1} s^{k-1} \, \mathrm{d}s, $$
which is the CDF of the $\text{Beta}(k,k)$ distribution. Since $\text{Beta}(k,k)$ has mean $\frac{1}{2}$ and variance $\frac{1}{4(2k+1)} \to 0$, its mass concentrates at $\frac{1}{2}$, and we find that
$$ \lim_{k\to\infty} P_k(t) = \begin{cases} 1, & t > \frac{1}{2}, \\ \frac{1}{2}, & t = \frac{1}{2}, \\ 0, & t < \frac{1}{2}. \end{cases} $$
So the trajectories do converge pointwise, but only to a step function: in the limit, the motion degenerates into an instantaneous jump at $t = \frac{1}{2}$.
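As a sanity check, here is a sketch using SciPy's `betainc` (the regularized incomplete beta function, which is exactly this CDF): the closed form reproduces the familiar quintic $10t^3 - 15t^4 + 6t^5$ for $k=3$, and the values at fixed $t$ visibly approach the step function as $k$ grows.

```python
import numpy as np
from scipy.special import betainc  # regularized incomplete beta = Beta CDF

def P(k, t):
    """P_k(t) as the CDF of a Beta(k, k) distribution."""
    return betainc(k, k, t)

# k = 3 reproduces the quintic solution 10t^3 - 15t^4 + 6t^5
t = np.linspace(0.0, 1.0, 11)
print(np.allclose(P(3, t), 10 * t**3 - 15 * t**4 + 6 * t**5))  # True

# Pointwise convergence: below 1/2 -> 0, above 1/2 -> 1
for k in (1, 5, 25, 125):
    print(f"k={k:3d}  P_k(0.4)={P(k, 0.4):.4f}  P_k(0.6)={P(k, 0.6):.4f}")
```

Unlike the linear-system approach, this evaluation stays numerically stable at high orders, since `betainc` never forms the polynomial coefficients explicitly.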