So I'm reading Simulation Based Optimization: Parametric Optimization Techniques and Reinforcement Learning by Abhijit Gosavi. On page 312 there's a definition as follows:
Definition 9.11. An equilibrium point $\vec x_{\star}$ is said to be a stable equilibrium point of the ODE $\frac{d\vec{x}}{dt}=f(\vec{x},t)$ if and only if for all $\epsilon>0$ there exists a scalar $r(\epsilon)>0$, which may depend on $\epsilon$, such that if $\vec\phi(0)\in B(\vec x_{\star}, r(\epsilon))$, where $B(\cdot)$ is the ball of radius $r(\epsilon)$ centered at $\vec x_{\star}$, and $\vec\phi(t)$ is the solution of the ODE concerned, then for all $t$, $\vec\phi(t)\in B(\vec x_{\star},\epsilon)$.
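For intuition (this example is mine, not from the book): for the scalar ODE $\frac{dx}{dt}=-x$ the origin is a stable equilibrium, and $r(\epsilon)=\epsilon$ works, since $|\phi(t)|=|\phi(0)|e^{-t}$ never exceeds $|\phi(0)|$. A crude forward-Euler integration can confirm this for a few sampled initial conditions:

```python
# Toy check of the eps-r(eps) condition for dx/dt = -x, equilibrium x* = 0.
# Here r(eps) = eps suffices; all function names below are my own.
def euler_trajectory(f, x0, t_end=10.0, dt=1e-3):
    """Forward-Euler approximation of dx/dt = f(x) from x0; returns visited points."""
    x, xs = x0, [x0]
    for _ in range(int(t_end / dt)):
        x = x + dt * f(x)
        xs.append(x)
    return xs

f = lambda x: -x   # vector field with equilibrium x* = 0
eps = 0.5
r = eps            # candidate r(eps) for this particular ODE

# every sampled trajectory starting in B(0, r) should remain in B(0, eps)
for x0 in (-0.49, -0.1, 0.0, 0.3, 0.49):
    assert all(abs(x) <= eps for x in euler_trajectory(f, x0))
print("all sampled trajectories stay inside the eps-ball")
```

Of course a finite sample of initial conditions proves nothing; it only illustrates what the definition asks for.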
So, my question is: if the ODE has more than one solution with $\vec\phi(0)\in B(\vec x_{\star}, r(\epsilon))$, do we have to guarantee that all of those solutions satisfy $\vec\phi(t)\in B(\vec x_{\star},\epsilon)$ for all $t$? Or do we just need to show there exists one solution that does?
Thank you very much!
It is probably assumed that $f$ is continuous and piecewise smooth (with only finitely many pieces on any bounded set), so that it is locally Lipschitz; then all solutions are automatically unique. Note that for $\vec x_{\star}$ to be an equilibrium point you need $f(\vec x_{\star},t)=0$ for all $t$.
But yes: every solution that starts in $B(\vec x_{\star}, r(\epsilon))$ has to stay inside $B(\vec x_{\star},\epsilon)$ in that way. Usually stability gets reduced to a property of the vector field $f$ itself anyway, so that questions of uniqueness of solutions recede into the background.
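To see why the Lipschitz caveat matters, here is the classic non-Lipschitz counterexample (standard textbook material, not from Gosavi's book): the IVP $\frac{dx}{dt}=2\sqrt{|x|}$, $x(0)=0$ has at least two solutions, $x(t)=0$ and $x(t)=t^2$, because $f(x)=2\sqrt{|x|}$ fails to be Lipschitz at $x=0$. A quick numerical check that both candidates satisfy the same ODE (helper names are mine):

```python
import math

# f is continuous but not Lipschitz at x = 0, so uniqueness can fail there.
f = lambda x: 2.0 * math.sqrt(abs(x))

def satisfies_ode(phi, dphi, ts, tol=1e-9):
    """Check dphi(t) == f(phi(t)) at the sample times ts."""
    return all(abs(dphi(t) - f(phi(t))) < tol for t in ts)

ts = [0.0, 0.5, 1.0, 2.0]
assert satisfies_ode(lambda t: 0.0, lambda t: 0.0, ts)        # trivial solution
assert satisfies_ode(lambda t: t * t, lambda t: 2.0 * t, ts)  # second solution
print("both candidates solve the same initial value problem")
```

With local Lipschitz continuity (as assumed above), Picard-Lindelöf rules this situation out and the "which solution?" question in the stability definition disappears.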