Stability without Lyapunov methods


I've been having trouble with a problem from the book I'm following, so any help would be really appreciated.

Defn. A fixed point $x_{0}$ is asymptotically stable if it is stable and (1) there is a neighborhood $U$ with $x_{0} \in U$ such that $|\phi(t,x)-x_{0}| \rightarrow 0$ as $t \rightarrow \infty$ for every $x \in U$.

Problem. Let $\dot x = x - y - x(x^{2}+y^{2}) + \frac{xy}{\sqrt{x^{2}+y^{2}}}$ and $\dot y = x + y - y(x^{2}+y^{2}) - \frac{x^{2}}{\sqrt{x^{2}+y^{2}}}$. I have to show that $(1,0)$ is not stable even though it satisfies (1).

My solution. First, I change from Cartesian to polar coordinates. The new system is $\dot r = r-r^{3}$ and $\dot \theta = 2\sin^{2}(\theta/2)$. The point we're studying stays the same after the transformation: $(r,\theta)=(1,0)$. Now my problems begin. I tried to apply the linearization theorem, and the Jacobian matrix is

$$\begin{bmatrix}1-3r^{2}&0\\0&2\sin(\theta/2)\cos(\theta/2)\end{bmatrix}$$

and evaluating at $(1,0)$:

$$\begin{bmatrix}-2&0\\0&0\end{bmatrix}$$

then its eigenvalues are $\lambda_{1,2}=-2,0$, and with a zero eigenvalue the linearization theorem tells us nothing, so this is no good for my analysis (right?). And even if this analysis could show instability, I haven't been able to prove that (1) holds.
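For what it's worth, the polar form can be sanity-checked numerically. Below is a hedged Python sketch (not from the book): `xdot`/`ydot` just transcribe the right-hand sides of the problem, and the identities $r\dot r = x\dot x + y\dot y$ and $r^{2}\dot\theta = x\dot y - y\dot x$ are tested at random points.

```python
import math
import random

# Right-hand sides of the Cartesian system from the problem statement.
def xdot(x, y):
    r2 = x * x + y * y
    r = math.sqrt(r2)
    return x - y - x * r2 + x * y / r

def ydot(x, y):
    r2 = x * x + y * y
    r = math.sqrt(r2)
    return x + y - y * r2 - x * x / r

# Check rdot = r - r^3 and thetadot = 2 sin^2(theta/2) at random points,
# using r*rdot = x*xdot + y*ydot and r^2*thetadot = x*ydot - y*xdot.
random.seed(0)
for _ in range(1000):
    r = random.uniform(0.1, 3.0)
    th = random.uniform(-math.pi, math.pi)
    x, y = r * math.cos(th), r * math.sin(th)
    rdot = (x * xdot(x, y) + y * ydot(x, y)) / r
    thdot = (x * ydot(x, y) - y * xdot(x, y)) / r ** 2
    assert abs(rdot - (r - r ** 3)) < 1e-8
    assert abs(thdot - 2.0 * math.sin(th / 2.0) ** 2) < 1e-8
print("polar form checked")
```

In particular $(1,0)$ is indeed a fixed point: both right-hand sides vanish there.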

The other way I think we could prove this is directly from the definition, but since I couldn't find the flow of the ODE explicitly, I can't apply the definitions.
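Even without an explicit flow, a quick numerical integration already shows the behaviour: start on the unit circle at a small angle $\theta_0$ and watch how far the orbit gets from $(1,0)$. This is a hedged sketch (forward Euler on the polar system; step size, time horizon, and $\theta_0$ are arbitrary choices of mine):

```python
import math

# Forward-Euler integration of the polar system
#   rdot = r - r^3,   thetadot = 2 sin^2(theta/2),
# tracking the maximal distance of the orbit from the point (1, 0).
def flow(r0, theta0, dt=1e-3, t_end=60.0):
    r, th = r0, theta0
    max_dist = 0.0
    for _ in range(int(t_end / dt)):
        dr = dt * (r - r ** 3)
        dth = dt * 2.0 * math.sin(th / 2.0) ** 2
        r, th = r + dr, th + dth
        x, y = r * math.cos(th), r * math.sin(th)
        max_dist = max(max_dist, math.hypot(x - 1.0, y))
    return th, max_dist

# Start at distance ~0.05 from (1, 0): theta climbs toward 2*pi, so the
# orbit travels around the circle, passing near (-1, 0), distance ~2 away.
th_final, max_dist = flow(1.0, 0.05)
print(th_final, max_dist)
```

So arbitrarily small perturbations of $(1,0)$ along the circle wander a distance of about $2$ away before coming back, which is exactly the instability the problem asks for.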

So if you guys could help me with it I'd be more than glad.

Thanks so much in advance, I really appreciate it. <3


There are 2 answers below.

BEST ANSWER

I assume that your change of coordinates is well defined. I did not check it, but that is something you should really do when using a change of coordinates. Can you write the transformation you used here?

Remember the definition of stability of a point $p$:

$\forall \epsilon>0, \exists \delta>0$ s.t. $\forall x_0\in B_{\delta}(p)$, $\phi(t, x_0) \in B_{\epsilon}(p)$ for all times $t>0$.

Below I will write symbolically the idea that no matter how close to your equilibrium you start, there is an $\epsilon$-ball that the flow eventually leaves (you can take this as the definition of instability):

$\exists \epsilon>0$ s.t. $\forall \delta>0$ $\exists x_0\in B_{\delta}(p)$ with $\phi(t, x_0) \notin B_{\epsilon}(p)$ for some time $t>0$.

Now, fix $\epsilon=1/2$ and $p=(1,0)$. Remember that if you choose a point on the circle of radius $1$, the flow stays on the circle, since $\dot{r}=0$ there; therefore we are only interested in the $\theta$ dynamics.

Consider a generic ball $B_{\delta}(p)$. No matter how you choose this ball, you can always find a point $x_0=(1, \theta_0) \in B_{\delta}(p)$ with $\theta_0 \neq 0$. (I suggest you find an explicit way of producing such an $x_0$.)

Therefore $\phi(t, x_0)$ will have the form $(1, \theta(t))$. The final step consists in showing that this flow leaves the $\epsilon$-ball. For example, we can show that in finite time the solution reaches the point $(1, \pi/2)$, which is clearly outside our $\epsilon=1/2$ ball.

How do we show this?

  1. $\theta(t)$ is monotone non-decreasing, since $\dot{\theta} = 2\sin^{2}(\theta/2)\geq 0$.
  2. $\dot{\theta}(t)\geq \sin^{2}(\theta_0/2)$ whenever $\theta(t) \in [\theta_0, \pi/2]$, since $\sin^{2}(\theta/2)$ is non-decreasing there.

Since $\theta(t)$ is monotone non-decreasing, it will either (a) go to infinity or (b) have a finite limit $\tilde{\theta}$. In case (a) you are done: you will definitely leave your $\epsilon$-ball.

Case (b) requires a small argument. We claim here that $\tilde{\theta} > \pi/2$.

Assume (by contradiction) $\tilde{\theta}\leq\pi/2$. Then $\theta(t) \in [\theta_0, \tilde{\theta}]\ \forall t>0$.

Recalling 2. :

$\dot{\theta}(t)\geq \sin^2(\theta_0/2) \implies \theta(t) \geq \theta_0 + \sin^2(\theta_0/2)\,t$

The last inequality is clearly a contradiction: $\theta(t)$ grows without bound, against the initial claim that it has a finite limit.

Remark: In the last proof by contradiction we did not prove that there is no finite limit for $\theta(t)$, but rather that this limit cannot be $\leq \pi/2$. Indeed, one can prove that $\theta$ converges to $2\pi$. As an exercise you could try to prove this fact with a similar contradiction argument. It is a common argument used, for example, to show asymptotic stability of an equilibrium via Lyapunov functions!
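The comparison bound in point 2 can also be checked numerically. A hedged sketch (forward Euler; the step size and the choice $\theta_0 = 0.3$ are mine), verifying the bound at every step until $\theta$ reaches $\pi/2$ and returning the finite escape time:

```python
import math

# Integrate thetadot = 2 sin^2(theta/2) from theta0 until theta reaches pi/2,
# checking at every step the lower bound from point 2 above:
#   theta(t) >= theta0 + sin^2(theta0/2) * t   (valid while theta <= pi/2).
def escape_time(theta0, dt=1e-4):
    th, t = theta0, 0.0
    slope = math.sin(theta0 / 2.0) ** 2
    while th < math.pi / 2.0:
        th += dt * 2.0 * math.sin(th / 2.0) ** 2
        t += dt
        assert th >= theta0 + slope * t  # the comparison bound holds
    return t

t_escape = escape_time(0.3)
print(t_escape)  # finite: the orbit reaches theta = pi/2 in finite time
```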

For the proof of property (1) (attractivity), we can actually show that, given any point in any $\delta$-ball around $p$, the solution converges to $(1,0)$. Indeed, we can prove that $\theta$ goes to $2\pi$ by simply adapting the argument above. For the convergence of the radius, note that for $r<1$, $r(t)$ is monotone increasing and bounded above by $1$, so its limit is clearly $r=1$. The analogous result holds for $r>1$, where $r(t)$ is monotone decreasing and bounded below by $1$. (In fact $f(r)=r-r^{3}$ is odd, so the phase line is symmetric.)
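The radius claim is easy to see numerically too; a hedged sketch (forward Euler, parameters mine):

```python
# rdot = r - r^3 drives the radius to 1 from either side; note that
# r = 1 is exactly a fixed point of the Euler update as well.
def r_at(r0, t_end=20.0, dt=1e-3):
    r = r0
    for _ in range(int(t_end / dt)):
        r += dt * (r - r ** 3)
    return r

print(r_at(0.2), r_at(3.0))  # both approach 1
```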

ANOTHER ANSWER

I am assuming your transformation is correct.

If you look at

$$\dot{r}=r(1-r^2)=r(1-r)(1+r).$$

You can identify three values of $r$ which lead to a vanishing derivative of $r$: $r=-1, 0, 1$. For $r<-1$ the derivative is positive; for $-1<r<0$ it is negative; for $0<r<1$ it is positive; and for $r>1$ it is negative. Hence, in the $r$ coordinate, $r=1$ appears asymptotically stable.

Now, use the second equation and do the same reasoning. Note that it is very likely that the circle $r=1$ is invariant, which explains the $0$ eigenvalue in the linearized equations. Check that if you take the derivative of $1=x^{2}+y^{2}$ along the flow you get a true statement. If that is the case, you can conclude that the circle really is invariant (though it is not a genuine limit cycle here: $(1,0)$ is a fixed point lying on it, so the circle is a homoclinic loop).
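Carrying out the suggested check with the original right-hand sides, the square-root terms cancel:

$$\frac{d}{dt}\left(x^{2}+y^{2}\right) = 2\left(x\dot x + y\dot y\right) = 2\left(x^{2}+y^{2}-\left(x^{2}+y^{2}\right)^{2}\right) = 2r^{2}\left(1-r^{2}\right),$$

which vanishes on $r=1$, so the circle is indeed invariant.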