Lyapunov analysis of marginally stable linear systems


Before I start, I want to emphasize I'm dealing with marginally stable linear systems, so many theorems about stable systems simply do not apply.

Let $\dot{x} = Ax$ be marginally stable. That is, for any $x(0) \in \mathbb{R}^n$, there exists $M > 0$ such that $\|x(t)\|_2 \leq M$ for all $t \geq 0$. This is equivalent to saying that all eigenvalues of $A$ have non-positive real part, and the eigenvalues with zero real part have $1 \times 1$ Jordan blocks. (I am particularly dealing with a matrix $A$ with real eigenvalues, where all eigenvalues but one are negative, and the remaining (simple) eigenvalue is $0$.)

This manuscript states $A$ is marginally stable if and only if there exist $Q \geq 0$ and $P > 0$ such that $$A^TP + PA = -Q.$$

My question is, is there a procedure to find such $Q, P$?

Edit: The linked manuscript has incorrect statements. @user1551 cleared everything up in their answer.

Accepted answer (by user1551):

Here is the theorem (on p.9 of the document) that you referred to:

Theorem. SISL for LTI CT Systems. Let $Q$ be a symmetric positive semidefinite matrix. Then the system $\dot{x}=Ax$ is SISL (e.g. marginally stable) if and only if the (symmetric) matrix $P$ which solves the CT Lyapunov equation $$ A^TP+PA=-Q $$ is positive definite.

This theorem is wrong:

  1. The matrix $P$ that solves the matrix equation is not always positive definite. Consider e.g. $A=\pmatrix{0&-1\\ 1&0},\,P=-I$ and $Q=0$. The $P$ in this counterexample is not even positive semi-definite.
  2. The theorem statement is ambiguous. It does not specify whether the matrix equation is always solvable or not, given Lyapunov stability. In fact, the equation is not always solvable. E.g. when $A=\pmatrix{0&-1\\ 1&0}$, the sum $A^TP+PA$ is traceless. Hence the only positive semi-definite matrix $Q$ that this sum can be equal to is zero.
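Both counterexamples are easy to check numerically; here is a quick sketch with NumPy:

```python
import numpy as np

A = np.array([[0., -1.],
              [1.,  0.]])  # pure rotation: eigenvalues ±i, marginally stable

# Counterexample 1: P = -I solves A^T P + P A = -Q with Q = 0,
# yet P is negative definite, not positive definite.
P = -np.eye(2)
Q = -(A.T @ P + P @ A)
print(np.allclose(Q, 0))   # True: the Lyapunov equation holds with Q = 0

# Counterexample 2: A^T P + P A is traceless for every symmetric P
# (since trace(A^T P + P A) = 2 trace(P A) = 0 for skew-symmetric A),
# so the only PSD right-hand side -Q it can equal is zero.
rng = np.random.default_rng(0)
S = rng.standard_normal((2, 2))
P_sym = S + S.T             # an arbitrary symmetric P
print(np.isclose(np.trace(A.T @ P_sym + P_sym @ A), 0))  # True
```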

I have no training in control theory, but I find the definitions of marginal stability in the literature (not just in the linked document) rather unclear. While all definitions (correctly) require the eigenvalues of $A$ to have non-positive real parts, they do not agree on what happens on the imaginary axis. The authors of the linked document seem to require all eigenvalues of $A$ on the imaginary axis, if any, to be simple, but they do not require $A$ to have at least one eigenvalue on the imaginary axis. In contrast, on p.7 of this handout by Eugene Lavretsky, marginal stability is defined (more reasonably, I believe) as “stable but not asymptotically stable”. For the system $\dot x=Ax$, this means that on the imaginary axis, (1) there is at least one eigenvalue of $A$, and (2) repeated eigenvalues are allowed, but these eigenvalues must be semi-simple.

For some curious reason, while the Wikipedia entry on marginal stability also defines marginal stability as “stable but not asymptotically stable”, in terms of eigenvalues it somehow states that the eigenvalues on the imaginary axis must be simple. I think Wikipedia errs here, because for every marginally stable system $\dot x=Ax$, the system $\dot x=\pmatrix{A\\ &A}x$ (where the length of the vector $x$ is doubled) should also be marginally stable.

Anyway, let us adhere to Eugene Lavretsky's definition of marginal stability. If $A$ is marginally stable, then by considering the real Jordan form of $A$, we have $$ A=SA_1S^{-1}=S\,\underbrace{\pmatrix{B\\ &K}}_{A_1}\,S^{-1} $$ for some stable matrix $B$ and non-empty skew-symmetric matrix $K$. It follows that the equation $A^TP+PA=-Q$ is equivalent to $A_1^TP_1+P_1A_1=-Q_1$ where $P_1=S^TPS$ and $Q_1=S^TQS$. This can be further rewritten as $$ \pmatrix{B^T\\ &K^T}\underbrace{\pmatrix{X&Y\\ Y^T&Z}}_{P_1}+\pmatrix{X&Y\\ Y^T&Z}\pmatrix{B\\ &K}=-Q_1\tag{1} $$ Since $K^TZ+ZK$ is traceless, the equation is solvable only if $Q_1$ is in the form of $\pmatrix{Q_2\\ &0}$ for some PSD matrix $Q_2$ of the same size as $B$. When this is the case, the general solution to $(1)$ is given by $P_1=\pmatrix{X\\ &Z}$ where $X$ is the unique PSD solution to $B^TX+XB=-Q_2$ and $Z$ is any symmetric matrix that commutes with $K$ (in particular, one may take $Z$ to be any positive/zero/negative multiple of the identity matrix). It follows that (i) both PSD and non-PSD solutions always exist, (ii) the solutions are always non-unique even if $P$ is required to be positive definite, and (iii) positive definite solutions exist if and only if $Q_2$ is positive definite.
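This construction can be sketched with SciPy. The blocks $B$, $K$, $Q_2$, and $Z$ below are made-up example data, not anything from the answer; the point is only to verify the block structure numerically:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, block_diag

# Hypothetical example blocks: stable B, skew-symmetric K.
B = np.array([[-1., 0.],
              [ 0., -3.]])
K = np.array([[ 0., 2.],
              [-2., 0.]])
A1 = block_diag(B, K)

# Q1 must have the form diag(Q2, 0) for the equation to be solvable.
Q2 = np.eye(2)
Q1 = block_diag(Q2, np.zeros((2, 2)))

# X is the unique PSD solution of B^T X + X B = -Q2.
# (solve_continuous_lyapunov(a, q) solves a X + X a^H = q.)
X = solve_continuous_lyapunov(B.T, -Q2)

# Z may be any symmetric matrix commuting with K, e.g. a multiple of I.
Z = 5.0 * np.eye(2)
P1 = block_diag(X, Z)

residual = A1.T @ P1 + P1 @ A1 + Q1
print(np.allclose(residual, 0))            # True: P1 solves the equation
print(np.all(np.linalg.eigvalsh(P1) > 0))  # True: P1 is positive definite
```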

Second answer:

It can be noted that a Lyapunov equation, of the form

$$ A^\top P + P\,A = -Q, \tag{1} $$

is a special case of a Sylvester equation. Such equations can be solved by transforming them into a system of linear equations using vectorization and the Kronecker product. Doing so shows that $(1)$ is equivalent to

$$ \underbrace{\left(I \otimes A^\top + A^\top\!\otimes I\right)}_M\,\text{vec}(P) = -\text{vec}(Q), \tag{2} $$

with $I$ the identity matrix of the same size as $A$.

There is a unique solution for $P$ only when $M$ is nonsingular, which is the case exactly when the spectra of $A^\top$ and $-A$ are disjoint. This fails when $A$ has an eigenvalue at zero or complex-conjugate eigenvalue pairs on the imaginary axis. So in your case $M$ is singular, because $A$ has one eigenvalue at zero. However, this still leaves two possibilities, depending on the value of $Q$: no solutions, or infinitely many.
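The singularity of $M$ is easy to check numerically; a minimal sketch, using a made-up $2 \times 2$ matrix $A$ with eigenvalues $-1$ and $0$ (the situation from the question):

```python
import numpy as np

def lyapunov_operator(A):
    """Return M with M @ vec(P) = vec(A^T P + P A), column-major vec."""
    n = A.shape[0]
    I = np.eye(n)
    return np.kron(I, A.T) + np.kron(A.T, I)

# Hypothetical A: eigenvalues -1 and 0, so M must be singular,
# since the eigenvalues of M are the pairwise sums lambda_i + lambda_j
# and 0 + 0 = 0 is among them.
A = np.array([[-1., 1.],
              [ 0., 0.]])
M = lyapunov_operator(A)
print(np.linalg.matrix_rank(M))  # 3: the 4x4 matrix M is singular
```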

One has infinitely many solutions when $\text{vec}(Q)$ lies in the column space of $M$. The corresponding solutions for $\text{vec}(P)$ can then be obtained by adding any element of the null space of $M$ to a particular solution.


For example consider the following

$$ A = \begin{bmatrix} -1 & 1 \\ 0 & 0 \end{bmatrix}, \tag{3} $$

which has one negative eigenvalue and one eigenvalue of zero. Plugging this $A$ into the expression for $M$ yields

$$ M = \begin{bmatrix} -2 & 0 & 0 & 0 \\ 1 & -1 & 0 & 0 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 1 & 0 \end{bmatrix}. \tag{4} $$

A basis for the column space of this matrix is given by

$$ \operatorname{range}(M) = \operatorname{span}\left\{ \begin{bmatrix}1 \\ 0 \\ 0 \\ -1\end{bmatrix}, \begin{bmatrix}0 \\ 1 \\ 0 \\ -1\end{bmatrix}, \begin{bmatrix}0 \\ 0 \\ 1 \\ -1\end{bmatrix}\right\}. $$

Therefore, any matrix $Q$ for which $(1)$ has a solution with the $A$ from $(3)$ must have $\text{vec}(Q)$ in this column space, i.e. the entries of $Q$ must sum to zero, so $Q$ has to be of the form

$$ Q = \begin{bmatrix} a & b \\ c & -a-b-c \end{bmatrix}. $$

However, $Q$ should be symmetric and positive semi-definite. So a possible matrix would be

$$ Q = \begin{bmatrix} 2 & -2 \\ -2 & 2 \end{bmatrix}. \tag{5} $$

A vector $v = \text{vec}(P)$ that solves $(2)$ with $(3)$ and $(5)$ is

$$ v = \begin{bmatrix} 1 \\ -1 \\ -1 \\ 1 \end{bmatrix}. $$

The null space of the matrix $M$ in $(4)$ is spanned by

$$ \text{null}(M) = \left\{ \begin{bmatrix}0 \\ 0 \\ 0 \\ 1\end{bmatrix}\right\}. $$

Adding this null-space vector to $v$ gives $\text{vec}(P) = \begin{bmatrix}1 & -1 & -1 & 2\end{bmatrix}^\top$. Therefore, a possible positive definite matrix $P$ that solves $(1)$ using $(3)$ and $(5)$ is

$$ P = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}. $$
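The whole computation above can be reproduced numerically. A sketch with NumPy, using column-major `vec` to match the ordering of $M$ in $(4)$; note that `lstsq` returns the minimum-norm particular solution, so the null-space vector is added with a different coefficient than above to reach the same positive definite $P$:

```python
import numpy as np

A = np.array([[-1., 1.],
              [ 0., 0.]])          # the matrix in (3)
Q = np.array([[ 2., -2.],
              [-2., 2.]])          # the matrix in (5)

I = np.eye(2)
M = np.kron(I, A.T) + np.kron(A.T, I)   # the matrix in (4)

vecQ = Q.flatten(order='F')             # column-major vec
# A particular solution of M v = -vec(Q); since M is singular,
# lstsq returns the minimum-norm solution of this consistent system.
v, *_ = np.linalg.lstsq(M, -vecQ, rcond=None)

# Null space of M: spanned by e4 = (0, 0, 0, 1).
null_vec = np.array([0., 0., 0., 1.])
print(np.allclose(M @ null_vec, 0))     # True

# Shift along the null space until P is positive definite.
P = (v + 2 * null_vec).reshape(2, 2, order='F')
print(np.allclose(A.T @ P + P @ A, -Q))     # True: P solves (1)
print(np.all(np.linalg.eigvalsh(P) > 0))    # True: P is positive definite
```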