Eigenvalues with positive real parts imply instability


Consider an autonomous ODE $\dot{x}=F(x)$ for which $0$ is an equilibrium point, and suppose all eigenvalues of $DF(0)$ have nonzero real parts.

I have learned that if the real parts of the eigenvalues are all negative, then the system is asymptotically stable near $0$. I now guess that if one of the eigenvalues has a positive real part, then the system is unstable near $0$, but I cannot find a proof. Can anyone help me?

Many thanks!




You are comparing two systems of differential equations, the linear system $$ \dot v=Av $$ and the non-linear system $$ \dot x=F(x) $$ with $F(0)=0$ a stationary point at the origin and $DF(0)=F'(0)=A$ as the Jacobian. If $F$ is sufficiently smooth, say at least twice differentiable, the two vector fields look similar close to the origin.

Now if one eigenvalue of $A$ is real and positive, or a conjugate eigenpair has a positive real part, then there are solutions of the linear system that connect the stationary point to infinity, moving outwards. The question is whether the non-linear system is similar enough that nearby solutions inherit enough of this divergence to prevent the stationary point from being stable.

The Hartman-Grobman theorem answers this in the most comprehensive way (lesser tools can also answer it with more effort): it provides a map $\Phi$ — in general only a homeomorphism; a diffeomorphism with $Φ(x)=x+O(|x|^2)$ under additional smoothness and non-resonance conditions — taking a neighborhood of the origin in the linear system to a neighborhood of the stationary point of the non-linear system (here also the origin), so that $$x(t)=Φ(e^{At}v_0)$$ is the solution of $$\dot x=F(x)~~\text{ with }~~x_0=Φ(v_0)\iff v_0=Φ^{-1}(x_0)$$ for $x_0$ close to the fixed point $0$ and for as long as the solution stays in that neighborhood.
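A concrete one-dimensional illustration (my own example, not from the original answer) makes such a conjugacy explicit. For $\dot x = x + x^2$, whose linearization at $0$ is $\dot v = v$ with $A=1$, the map $\Phi(v)=\frac{v}{1-v}=v+O(|v|^2)$ conjugates the flows:

```latex
% Linear flow: v(t) = e^{t} v_0.
% Claim: x(t) = \Phi(e^{t} v_0) solves \dot x = x + x^2.
x(t) = \frac{v_0 e^{t}}{1 - v_0 e^{t}},
\qquad
\dot x = \frac{v_0 e^{t}}{\bigl(1 - v_0 e^{t}\bigr)^{2}} = x + x^{2},
\qquad
v_0 = \Phi^{-1}(x_0) = \frac{x_0}{1 + x_0}.
```

The formula is valid as long as $v_0 e^{t}<1$, i.e. while the solution stays in the neighborhood; the exponential growth $e^{t}$ of the linear flow is transported directly to the non-linear solution.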

Now if $A$ has an eigenvalue $\lambda=\alpha+i\beta$ with positive real part $α$, then there will be an eigenvector $u+iw$ leading to real solutions $e^{αt}(\cos(βt)u-\sin(βt)w)$ of the linear equation and thus $$ x(t)=Φ(v(t))=Φ\bigl(e^{αt}(\cos(βt)u-\sin(βt)w)\bigr) $$ (for a real positive eigenvalue just set $β=0$). Now if you fix some point $x(t_*)$ on this solution that is still inside the range of the map $\Phi$, then the points $x_0=x(t_0)$, $t_0<t_*$, can be selected arbitrarily close to the origin by letting $t_0\to-\infty$ (select $β(t_*-t_0)\in 2\pi\Bbb Z$ to get points on the same ray to the origin). Thus you get initial points arbitrarily close to the origin whose solutions pass through the same point $x(t_*)$ (relatively) far away from the origin. This contradicts both the definition of stability and that of asymptotic stability.
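This escape from any neighborhood can be observed numerically. Below is a minimal sketch (the particular system, whose linear part has eigenvalues $0.5\pm i$, and all thresholds are my own choices for illustration): a trajectory started at distance $10^{-6}$ from the origin still leaves a fixed ball.

```python
import math

# Nonlinear field with linearization A = [[0.5, -1], [1, 0.5]],
# eigenvalues 0.5 +/- i (positive real part => unstable origin).
def F(x, y):
    return 0.5 * x - y + x * x, x + 0.5 * y + y * y

def rk4_step(x, y, dt):
    """One classical Runge-Kutta step for the planar field F."""
    k1 = F(x, y)
    k2 = F(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = F(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = F(x + dt * k3[0], y + dt * k3[1])
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def escape_time(x0, y0, radius=0.1, dt=0.01, t_max=60.0):
    """Integrate until the trajectory leaves the ball of the given radius."""
    x, y, t = x0, y0, 0.0
    while t < t_max:
        if math.hypot(x, y) >= radius:
            return t
        x, y = rk4_step(x, y, dt)
        t += dt
    return None  # never escaped within t_max

t_escape = escape_time(1e-6, 0.0)
print(t_escape)
```

Shrinking the initial norm only delays the escape (logarithmically in the norm); it never prevents it, which is precisely the failure of stability.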


A proof of this result is given in Chapter 8 of Prof. Lebovitz's notes on Ordinary Differential Equations; I paste his proof here.

Theorem 8.4.2 Consider the autonomous dynamical system \begin{equation} \dot x = Ax + g(x),~~~~x\in\Omega \label{system} \end{equation} where $\Omega$ is an open region in $\mathbb R^n$ which contains the origin, $g\in C^1(\Omega)$ and $g(x) = o(|x|)$ as $x\rightarrow0$. If one of the eigenvalues of $A\in\mathbb R^{n\times n}$ has a positive real part, then the origin is an unstable equilibrium point.

The proof is accomplished by splitting the eigenvalues of $A$ into those with positive and those with non-positive real parts. Let's assume the eigenvalues $\lambda_1,\cdots,\lambda_{n_1}$ of $A$ have positive real parts, while the remaining eigenvalues $\lambda_{n_1+1},\cdots,\lambda_n$ have non-positive real parts. Let \begin{equation} \sigma = \min\{\Re(\lambda_1),\cdots,\Re(\lambda_{n_1})\} >0 \end{equation} be a positive lower bound on the real parts of $\lambda_1,\cdots,\lambda_{n_1}$. Now we apply the similarity transformation $B = P^{-1}AP$ to obtain the Jordan normal form of $A$, which is given in the block form \begin{equation} B = \begin{bmatrix} B_1 & 0 \\ 0 & B_2 \end{bmatrix} \in \mathbb C^{n\times n} \end{equation} and the matrices $B_1\in\mathbb C^{n_1\times n_1}$ and $B_2\in\mathbb C^{(n-n_1)\times(n-n_1)}$ satisfy

  • $B_1$ and $B_2$ possess the Jordan structure: eigenvalues appear on the diagonal, and a certain number of constants occur along a secondary diagonal;
  • Each eigenvalue of $B_1$ has a positive real part whereas no eigenvalue of $B_2$ has a positive real part;
  • The constants along the secondary diagonal are all equal to $\nu$, which is a fixed positive constant chosen to satisfy $\nu<\sigma/8$.
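The third point relies on a standard rescaling of the Jordan form (a short verification of my own, not part of the notes' text): conjugating a Jordan block by a suitable diagonal matrix turns the superdiagonal ones into $\nu$.

```latex
% J = \lambda I + N, with N having ones on the superdiagonal.
% Conjugate by D = \operatorname{diag}(1,\nu,\nu^{2},\dots,\nu^{k-1}):
(D^{-1} J D)_{j,\,j+1} = \nu^{-(j-1)} \cdot 1 \cdot \nu^{j} = \nu,
\qquad\text{hence}\qquad
D^{-1} J D = \lambda I + \nu N .
```

Absorbing such a $D$ into the transformation $P$ yields the matrix $B$ with the constant $\nu$ on its secondary diagonal.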

We make the change of variables $y = P^{-1}x\in D$, where $D:=\{P^{-1}x:x\in\Omega\}$ is a subset of $\mathbb C^n$. Note that the transformation matrix $P\in\mathbb C^{n\times n}$ may contain complex entries, hence $D$ need not be contained in $\mathbb R^n$. Then $h(y) = P^{-1} g(Py)$ is well-defined as a $C^1$ mapping on $D$. We fix a constant $\epsilon_0 < \sigma/8$ and choose $\delta_0>0$ such that \begin{equation} 0<|y|<\delta_0\Longrightarrow |h(y)|<\epsilon_0|y| \end{equation} The original dynamical system can now be written as \begin{align*} \dot y_1 & = B_1 y_1 + h_1(y) \\ \dot y_2 & = B_2 y_2 + h_2(y) \end{align*} Define the norms of $y_1\in\mathbb C^{n_1}$ and $y_2\in\mathbb C^{n-n_1}$ by \begin{equation} R_1 = |y_1| = \bigg(\sum_{i=1}^{n_1} |y^{(i)}|^2 \bigg)^{\frac12},~~~~ R_2 = |y_2| = \bigg(\sum_{i=n_1+1}^{n} |y^{(i)}|^2 \bigg)^{\frac12} \end{equation} where $y^{(i)}$ denotes the $i$-th entry of $y\in\mathbb C^n$. Then we can verify \begin{equation} \frac{\mathrm{d}}{\mathrm{d}t}R_1^2(t) = 2\Re\bigg\{ \sum_{j,l=1}^{n_1} B^{(j,l)} \overline{y^{(j)}} y^{(l)} + \sum_{j=1}^{n_1} \overline{y^{(j)}} h^{(j)}(y)\bigg\} \end{equation} In the equation above, we have

  • The diagonal part \begin{equation} 2\Re\bigg\{ \sum_{j=1}^{n_1} B^{(j,j)} |y^{(j)}|^2 \bigg\} = 2\sum_{j=1}^{n_1} \Re(\lambda_j)\,|y^{(j)}|^2 \geqslant 2\sigma \sum_{j=1}^{n_1} |y^{(j)}|^2 = 2\sigma R_1^2 \end{equation}
  • The off-diagonal (secondary diagonal) part \begin{align*} \bigg|2\Re\bigg\{ \sum_{j=1}^{n_1-1} B^{(j,j+1)} \overline{y^{(j)}} y^{(j+1)} \bigg\}\bigg| & \leqslant 2\sum_{j=1}^{n_1-1} |B^{(j,j+1)}||y^{(j)}||y^{(j+1)}| \\ & \leqslant 2\nu\sum_{j=1}^{n_1} |y^{(j)}|^2 = 2\nu R_1^2 \end{align*}
  • The nonlinear part \begin{align*} \bigg| 2\Re\bigg\{ \sum_{j=1}^{n_1} \overline{y^{(j)}} h^{(j)}(y) \bigg\} \bigg| & \leqslant 2\sum_{j=1}^{n_1} |y^{(j)}| |h^{(j)}(y)| \\ & \leqslant 2 |y_1| |h_1(y)| \\ & \leqslant 2\epsilon_0 R_1 (R_1+R_2) \end{align*}
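Combining the diagonal and secondary-diagonal estimates gives the quadratic-form bound $\Re\,\bigl(\sum_{j,l} B^{(j,l)}\overline{y^{(j)}}y^{(l)}\bigr)\geqslant(\sigma-\nu)\,R_1^2$, which is what drives the growth of $R_1$. A quick numerical spot-check (the matrix size, eigenvalues, and $\nu$ below are my own arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Jordan-type matrix B1: eigenvalues with positive real parts on the
# diagonal, constant nu on the superdiagonal (nu < sigma/8 as in the proof).
eigs = np.array([1.0 + 2.0j, 1.0 + 2.0j, 1.5, 2.0 - 1.0j])
sigma = min(e.real for e in eigs)          # lower bound on the real parts
nu = sigma / 10.0                          # any nu < sigma/8 works
B1 = np.diag(eigs) + nu * np.diag(np.ones(3), k=1)

# Check  Re(y* B1 y) >= (sigma - nu) |y|^2  on random complex vectors.
worst = np.inf
for _ in range(1000):
    y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    quad = np.vdot(y, B1 @ y).real          # Re(y* B1 y)
    worst = min(worst, quad - (sigma - nu) * np.vdot(y, y).real)
print(worst)  # should be non-negative up to rounding
```

The slack `worst` stays non-negative because the diagonal contributes at least $\sigma|y_1|^2$ while the superdiagonal perturbs by at most $\nu|y_1|^2$.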

Hence we conclude \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t}R_1(t) \geqslant (\sigma - \nu -\epsilon_0) R_1 - \epsilon_0 R_2 \end{equation} In a similar way, we can prove that \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t}R_2(t) \leqslant (\nu+\epsilon_0) R_2 + \epsilon_0 R_1 \end{equation} Subtracting the second inequality from the first gives \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t}(R_1-R_2) \geqslant (\sigma - \nu - 2\epsilon_0) R_1 - (\nu + 2\epsilon_0) R_2 \end{equation} Recalling the choices $\nu,\epsilon_0 < \sigma/8$ (so $\sigma-\nu-2\epsilon_0 > 5\sigma/8 \geqslant \sigma/2$ and $\nu+2\epsilon_0 < 3\sigma/8 \leqslant \sigma/2$, while $R_1,R_2\geqslant0$), we obtain \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t}(R_1 - R_2) \geqslant \frac{\sigma}2(R_1-R_2) \end{equation} Let's choose the initial data such that $R_1>R_2$ at $t=0$; then Grönwall's inequality yields \begin{equation} R_1(t)-R_2(t) \geqslant \bigl(R_1(0)-R_2(0)\bigr)\,e^{\sigma t/2} \end{equation} as long as the solution remains in the ball $|y|<\delta_0$, where the estimate $|h(y)|<\epsilon_0|y|$ is available. Since $R_1-R_2$ grows exponentially, the solution must leave this fixed ball; and since such initial data can be chosen arbitrarily close to the origin, this contradicts stability.