Clarification on asymptotic stability of dynamical systems


I'm wondering if someone can clarify two seemingly opposing statements about asymptotic stability that I've found in reputable sources on dynamical systems!

My Russian textbook, "Dynamical Systems I: Ordinary Differential Equations and Smooth Dynamical Systems" by Anosov, Arnol'd, Aronson, et al., gives the following criterion for deciding whether a singular point of a dynamical system is asymptotically stable:

Theorem 4.2: If all eigenvalues of the linear part of a vector field $v$ at a singular point have negative real part, then the singular point is asymptotically stable.

To me, this means that for an arbitrary dynamical system, say $\dot{x} = f(x)$ with $x \in \mathbb{R}^{n}$, one can find the points where $f(x) = 0$ and compute the eigenvalues of the Jacobian at those points to determine stability. Further, if one finds that $\operatorname{Re}(\lambda_{i}) < 0$ for $i = 1, 2, \ldots, n$, then the point is locally stable by this theorem. But is the theorem also saying that the point is asymptotically stable?
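To make sure I am reading the theorem correctly, here is a small sketch of the check I have in mind (my own toy example in Python, using a damped pendulum as the system; the function and variable names are just mine, not from the book):

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Numerically approximate the Jacobian of f at x by central differences."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

def f(x):
    # Damped pendulum vector field; the origin is a singular point, f(0) = 0.
    return np.array([x[1], -np.sin(x[0]) - x[1]])

x_star = np.zeros(2)                      # the equilibrium under consideration
eigvals = np.linalg.eigvals(jacobian(f, x_star))
print(eigvals)                            # approx. -0.5 +/- 0.866j
print(np.all(eigvals.real < 0))           # True -> asymptotically stable by Theorem 4.2?
```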

Yet almost every textbook on ODEs that I have checked says that, to determine whether an equilibrium point is asymptotically stable, some more general method is required, such as constructing Lyapunov functions, determining limit sets, etc.
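For concreteness, the kind of situation I imagine those more general methods are meant for is a toy system like (my own example, not taken from any of these books)

$$\dot{x} = -x^{3}, \qquad x \in \mathbb{R},$$

where the linearization at the equilibrium $x = 0$ has the single eigenvalue $0$, so Theorem 4.2 does not apply, yet $V(x) = x^{2}$ gives $\dot{V} = 2x\dot{x} = -2x^{4} < 0$ for $x \neq 0$, and Lyapunov's direct method shows the origin is asymptotically stable anyway.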

Why is there such a difference? Is there a difference?