Summary
This is a question about the maximum error that sampling a dynamical-system trajectory with a chosen time step $\delta$ introduces. I formulate it as an optimization problem over the perpendicular distance between the trajectory and its projection onto its linear interpolation (the chord between consecutive samples). I want to show that the location of the maximum of this error is independent of the initial state, so that a tight bound can be derived. I am aware of other, looser bounds derived via Taylor expansion.
Problem statement
Given is an autonomous dynamical system $\frac{d}{dt}\boldsymbol{x}(t)=\boldsymbol{A}\boldsymbol{x}(t)$, $\boldsymbol{x}\in\mathbb{R}^n$ and $\boldsymbol{A}\in\mathbb{R}^{n\times n}$, with a trajectory $\boldsymbol{x}(t) = e^{\boldsymbol{A}t}\boldsymbol{x}_0$, where $\boldsymbol{x}_0$ is an initial state. Now define the shifted trajectory function as: $$\boldsymbol{v}(t,\boldsymbol{x}) = (e^{\boldsymbol{A}t} - \boldsymbol{I})\boldsymbol{x} = \Psi(t)\boldsymbol{x}.$$
Given a time step $\delta\in\mathbb{R}_+$, let $\boldsymbol{v}_\delta(\boldsymbol{x}) \triangleq \boldsymbol{v}(\delta,\boldsymbol{x}) = \Psi_\delta\boldsymbol{x}$ be the chord used for linear interpolation. The orthogonal projection of $\boldsymbol{v}$ onto $\boldsymbol{v}_\delta$ is then: $$\bar{\boldsymbol{v}}(t,\boldsymbol{x}) = \frac{\boldsymbol{v}_\delta(\boldsymbol{x})\boldsymbol{v}_\delta(\boldsymbol{x})^T}{\lVert \boldsymbol{v}_\delta(\boldsymbol{x}) \rVert^2}\boldsymbol{v}(t,\boldsymbol{x}).$$
Now let's define the squared perpendicular distance (by the Pythagorean theorem): $$ d(t,\boldsymbol{x}) = \lVert\boldsymbol{v}(t,\boldsymbol{x})\rVert^2 - \lVert\bar{\boldsymbol{v}}(t,\boldsymbol{x})\rVert^2 = \lVert\Psi(t)\boldsymbol{x}\rVert^2 - \frac{\lVert \Psi_\delta\boldsymbol{x}\boldsymbol{x}^T\Psi_\delta^T\Psi(t)\boldsymbol{x} \rVert^2} {\lVert \Psi_\delta\boldsymbol{x} \rVert^4} = \lVert\Psi(t)\boldsymbol{x}\rVert^2 - \frac{( \boldsymbol{x}^T\Psi_\delta^T\Psi(t)\boldsymbol{x})^2} {\lVert \Psi_\delta\boldsymbol{x} \rVert^2}.$$
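For concreteness, here is a small numerical sketch of $d$. The helper `expm_taylor` is a crude stand-in for the matrix exponential (in practice one would use `scipy.linalg.expm`), and the system $\boldsymbol{A}$, state $\boldsymbol{x}$, and step $\delta$ are arbitrary example values of my own choosing:

```python
import numpy as np

def expm_taylor(M, terms=40):
    # Truncated Taylor series for e^M; adequate for the small matrices and
    # moderate norms in this sketch, not a production algorithm.
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def d(t, delta, A, x):
    # Squared perpendicular distance between v(t, x) = Psi(t) x and its
    # orthogonal projection onto the chord v_delta(x) = Psi_delta x.
    n = len(A)
    v = (expm_taylor(A * t) - np.eye(n)) @ x
    vd = (expm_taylor(A * delta) - np.eye(n)) @ x
    return v @ v - (vd @ v) ** 2 / (vd @ vd)

# Arbitrary example system and initial state.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
x = np.array([1.0, 0.5])
delta = 1.0
print(d(0.0, delta, A, x), d(0.4, delta, A, x), d(delta, delta, A, x))
# d vanishes at the endpoints t = 0 and t = delta and is nonnegative
# in between (Cauchy-Schwarz).
```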
See this figure for an example.
What I wish to find is a global maximizer $\tau\in[0,\delta]$ of $d(\cdot,\boldsymbol{x})$ that is the same for every $\boldsymbol{x}\in\mathbb{R}^n\setminus\{\boldsymbol{0}\}$.
Question
Given that the gradient of $d$ w.r.t. $t$ is: $$\nabla_td(t,\boldsymbol{x}) = 2\left(\boldsymbol{x}^T\Psi^T(t)\boldsymbol{A}e^{\boldsymbol{A}t}\boldsymbol{x} - \frac{\boldsymbol{x}^T\Psi^T(t)\Psi_\delta\boldsymbol{x}\boldsymbol{x}^T\Psi_\delta^T\boldsymbol{A}e^{\boldsymbol{A}t}\boldsymbol{x}}{\lVert\Psi_\delta\boldsymbol{x}\rVert^2}\right),$$ I want to show that if $\tau\in(0,\delta)$ is indeed an extremizer, then the extremizer is independent of $\boldsymbol{x}$, formally: $$\exists \tau\in(0,\delta):\forall \boldsymbol{x}\in\mathbb{R}^n\setminus\{\boldsymbol{0}\}:\nabla_td(\tau,\boldsymbol{x})=0$$
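As a sanity check on this derivative formula (not a proof), one can compare it against a central finite difference. The helper `expm_taylor` is a simple stand-in for the matrix exponential, and the system and evaluation point are arbitrary example choices:

```python
import numpy as np

def expm_taylor(M, terms=40):
    # Truncated Taylor series for e^M (sketch quality; fine at these norms).
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def d(t, delta, A, x):
    n = len(A)
    v = (expm_taylor(A * t) - np.eye(n)) @ x
    vd = (expm_taylor(A * delta) - np.eye(n)) @ x
    return v @ v - (vd @ v) ** 2 / (vd @ vd)

def grad_t_d(t, delta, A, x):
    # Closed-form derivative from the question, written with
    # v = Psi(t) x,  w = A e^{At} x,  vd = Psi_delta x.
    n = len(A)
    Phi = expm_taylor(A * t)
    v = (Phi - np.eye(n)) @ x
    vd = (expm_taylor(A * delta) - np.eye(n)) @ x
    w = A @ Phi @ x
    return 2.0 * (v @ w - (v @ vd) * (vd @ w) / (vd @ vd))

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
x = np.array([1.0, 0.5])
delta, t, h = 1.0, 0.3, 1e-6
fd = (d(t + h, delta, A, x) - d(t - h, delta, A, x)) / (2 * h)
print(abs(fd - grad_t_d(t, delta, A, x)))  # small if the formula is consistent
```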
My question is how to show this, given only that the gradient vanishes at some particular $\boldsymbol{x}$ and $\tau$.
My attempt
First note that $d\in\mathcal{C}^\infty$, and thus by the extreme value theorem, for each fixed $\boldsymbol{x}\in\mathbb{R}^n\setminus\{\boldsymbol{0}\}$ an extremum is attained at some $\tau\in[0,\delta]$. Let $\Phi_\tau \triangleq e^{\boldsymbol{A}\tau}$ and $\Psi_\tau \triangleq \Psi(\tau)$, then: $$ \nabla_t d(\tau,\boldsymbol{x}) = 2\left(\boldsymbol{x}^T\Psi^T_\tau\boldsymbol{A}\Phi_\tau\boldsymbol{x} - \frac{\boldsymbol{x}^T\Psi^T_\tau\Psi_\delta\boldsymbol{x}\boldsymbol{x}^T\Psi_\delta^T\boldsymbol{A}\Phi_\tau\boldsymbol{x}}{\lVert\Psi_\delta\boldsymbol{x}\rVert^2}\right) = 0.$$
I rearranged the above equality into $$\frac{\boldsymbol{x}^T\Psi^T_\tau\boldsymbol{A}\Phi_\tau\boldsymbol{x}}{\boldsymbol{x}^T\Psi_\delta^T\boldsymbol{A}\Phi_\tau\boldsymbol{x}} = \frac{\boldsymbol{x}^T\Psi^T_\tau\Psi_\delta\boldsymbol{x}}{\boldsymbol{x}^T\Psi_\delta^T\Psi_\delta\boldsymbol{x}},$$ where I am stuck with a bunch of quadratic forms and don't know how to get rid of the $\boldsymbol{x}$. I suspect it is something trivial that I am overlooking. Simulations do show, however, that my claim is correct.
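The kind of simulation mentioned can be reproduced with a crude grid search: locate the argmax of $d(\cdot,\boldsymbol{x})$ for several random $\boldsymbol{x}$ and check that it does not move. The helper `expm_taylor` and the example system are my own illustrative choices, and the grid resolution bounds how precisely the argmax can agree:

```python
import numpy as np

def expm_taylor(M, terms=40):
    # Truncated Taylor series for e^M (sketch-quality matrix exponential).
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
delta = 1.0
n = len(A)
ts = np.linspace(1e-4, delta - 1e-4, 2001)
# Precompute Psi(t) on the grid; it does not depend on x.
Psis = [expm_taylor(A * t) - np.eye(n) for t in ts]
Psi_d = expm_taylor(A * delta) - np.eye(n)

rng = np.random.default_rng(0)
taus = []
for _ in range(5):
    x = rng.standard_normal(n)
    vd = Psi_d @ x
    vals = []
    for P in Psis:
        v = P @ x
        vals.append(v @ v - (vd @ v) ** 2 / (vd @ vd))
    taus.append(ts[np.argmax(vals)])
print(taus)  # the argmax should agree across x up to the grid spacing
```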
This is as far as I have gotten with the question.
Useful fact
Let $\boldsymbol{u}$ be an eigenvector of $\boldsymbol{A}$, then $$ \forall t\in[0,\delta]:d(t,\boldsymbol{u}) = 0. $$ This is easily shown via substitution: if $\boldsymbol{A}\boldsymbol{u}=\lambda\boldsymbol{u}$, then $\Psi(t)\boldsymbol{u}=(e^{\lambda t}-1)\boldsymbol{u}$ is collinear with $\Psi_\delta\boldsymbol{u}$, so the projection is exact and the distance vanishes.
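This fact can also be checked numerically. Below, $\boldsymbol{A}$ is an example matrix chosen by hand so that it has the real eigenpair $(\lambda,\boldsymbol{u})=(-1,(2,1)^T)$, and `expm_taylor` is again a sketch-quality matrix exponential:

```python
import numpy as np

def expm_taylor(M, terms=40):
    # Truncated Taylor series for e^M (sketch quality).
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def d(t, delta, A, x):
    n = len(A)
    v = (expm_taylor(A * t) - np.eye(n)) @ x
    vd = (expm_taylor(A * delta) - np.eye(n)) @ x
    return v @ v - (vd @ v) ** 2 / (vd @ vd)

# A u = -u, so Psi(t) u = (e^{-t} - 1) u stays collinear with
# Psi_delta u; the projection is exact and d(t, u) = 0 for all t.
A = np.array([[-1.0, 0.0], [1.0, -3.0]])
u = np.array([2.0, 1.0])
delta = 1.0
print(max(abs(d(t, delta, A, u)) for t in np.linspace(0.05, 0.95, 10)))
```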
If you have any suggestions or useful insights on how to prove the claims above, I would greatly appreciate it. Thanks!
EDIT: Previously I had an extra question, but I decided to remove it, because it is a completely different problem on its own, and I will most likely post it separately.