Lyapunov equation: Relationship between the eigenvalues of P and Q


While studying some problems in the stability of linear/nonlinear systems, I found the following question interesting. I believe it is related to existing results, but I would like to address the problem from a more constructive point of view.

Let $A$ be a Hurwitz matrix (all eigenvalues have real part strictly smaller than 0) and consider the classic Lyapunov equation:

$A^{T}P + PA = -Q,\ Q>0$

We already know that the solution $P$ of the equation above is unique and positive definite. I was asking myself: can we choose $Q$ such that the eigenvalues of $P$ lie in a certain region? If so, how? What constraints does the matrix $A$ (Hurwitz in this case) impose on this problem?
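For concreteness, the equation can be solved numerically. Below is a minimal sketch using SciPy; the matrix $A$ is a made-up Hurwitz example, not tied to any particular system:

```python
# Sketch: solve the Lyapunov equation A^T P + P A = -Q numerically.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])             # Hurwitz: eigenvalues -1, -3
Q = np.eye(2)                           # any Q > 0

# SciPy solves a X + X a^H = q, so pass a = A^T and q = -Q to get
# A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

print(np.linalg.eigvalsh(P))            # eigenvalues of P, all positive
print(np.allclose(A.T @ P + P @ A, -Q)) # residual check
```

One can then inspect how the eigenvalues of $P$ move as $Q$ is varied, which is exactly the question posed above.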

Edit: I think that if $A$ is diagonalizable, then we have something interesting to say (I will dig a bit more). Maybe this approach could be extended to the non-diagonalizable case.


Best answer

This isn't a complete answer, but it would be too long for a comment. Namely, there are two properties that can be shown to hold for the Lyapunov equation.

Firstly, one can always find (in the limit) some $Q$ proportional to $P$, i.e. $Q = \alpha\,P$. One can approximate this by using $Q = \alpha\,P + \beta\,I$ and letting $\beta$ go to zero. Substituting this into the Lyapunov equation yields

\begin{align} A^\top P + P\,A &= -\alpha\,P - \beta\,I, \\ A^\top P + \frac{\alpha}{2}P + P\,A + \frac{\alpha}{2}P &= -\beta\,I, \\ \left(A + \frac{\alpha}{2}I\right)^\top P + P \left(A + \frac{\alpha}{2}I\right) &= -\beta\,I. \end{align}

The largest value of $\alpha$ for which this modified Lyapunov equation can be solved for a positive definite $P$ is the one for which $A + \frac{\alpha}{2}I$ just barely remains Hurwitz. Therefore, $\alpha$ cannot be equal to or greater than the smallest real part of the eigenvalues of $-2\,A$. One can easily show that if $Q_1$ has the solution $P_1$, then the scaled $Q_2=\gamma\,Q_1$ has the solution scaled by the same factor, $P_2=\gamma\,P_1$. Therefore, one might as well use $\beta=1$ and scale the resulting $P$ if desired. Note, however, that as $\alpha$ approaches its upper limit, the associated solution grows without bound. Namely, the effective system matrix $A + \frac{\alpha}{2}I$ gets "closer" to being unstable/marginally stable, and $P$ can be seen as a cost defined as
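This construction can be checked numerically. The sketch below uses an assumed example $A$ and picks $\alpha$ at 75% of the bound; it solves the shifted equation with $\beta = 1$ and verifies that the induced $Q = \alpha P + I$ satisfies the original Lyapunov equation:

```python
# Sketch: build Q = alpha*P + beta*I via the shifted Lyapunov equation.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])                       # example Hurwitz matrix
alpha_max = 2 * min(-np.linalg.eigvals(A).real)   # smallest real part of eig(-2A)
alpha = 0.75 * alpha_max                          # strictly inside the bound

A_shift = A + 0.5 * alpha * np.eye(2)             # still Hurwitz for this alpha
P = solve_continuous_lyapunov(A_shift.T, -np.eye(2))  # beta = 1

Q = alpha * P + np.eye(2)                         # the induced Q
print(np.allclose(A.T @ P + P @ A, -Q))           # original equation holds
```

Pushing `alpha` closer to `alpha_max` makes the entries of `P` blow up, as described above.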

\begin{align} &J(x(0)) = \int_0^\infty \beta\,x^\top\!(t)\,x(t)\,dt = x^\top\!(0)\, P\,x(0), \\ &\text{s.t.} \ \dot{x}(t)=\left(A + \frac{\alpha}{2}I\right)\,x(t). \end{align}

So as the solution for $P$ grows larger, the contribution of $\beta\,I$ to $Q$ becomes negligible, and in the limit as $\alpha$ goes to its upper bound, $Q$ approaches $\alpha\,P$.

Secondly, similar to the infinite-time observability/controllability Gramians, the solution of the Lyapunov equation can also be written as the following integral, analogous to the previously defined cost function,

$$ P(Q) = \int_0^\infty e^{A^\top\,t} Q\,e^{A\,t}\,dt. $$
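This integral form can be checked against the algebraic solution. A sketch with an assumed example $A$ and $Q$, truncating the integral at a horizon where $e^{A t}$ has decayed to numerical noise:

```python
# Sketch: the Gramian-style integral reproduces the Lyapunov solution.
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov
from scipy.integrate import quad_vec

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])             # example Hurwitz matrix
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])              # some positive definite Q

# Integrate the matrix-valued integrand up to T = 40, by which point the
# integrand ~ e^{-2t} is negligible.
P_int, _ = quad_vec(lambda t: expm(A.T * t) @ Q @ expm(A * t), 0.0, 40.0)
P_lyap = solve_continuous_lyapunov(A.T, -Q)
print(np.allclose(P_int, P_lyap))
```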

Now, if we want to maximize the smallest eigenvalue of $Q$ divided by the largest eigenvalue of $P$, the optimal (normalized) value for $Q$ is the identity matrix. Here, normalization means constraining the smallest eigenvalue of $Q$ to one, which can be justified using the scaling property of the solution of a Lyapunov equation defined earlier. This fixes the smallest eigenvalue of $Q$, so for the optimal solution the largest eigenvalue of $P$ has to be minimized. Using the integral form of the Lyapunov equation and the normalization of $Q$, it can be shown that

$$ P(Q) - P(I) = \int_0^\infty e^{A^\top t} \underbrace{\left(Q - I\right)}_{\succeq 0} e^{A\,t}\,dt \succeq 0, $$

so $\lambda_\text{max}(P(Q)) \geq \lambda_\text{max}(P(I))$, and thus the largest eigenvalue of $P$ is minimized when $Q=I$.
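As a sanity check, one can sample random $Q$ normalized so that $\lambda_\text{min}(Q) = 1$ and confirm that none of them yields a smaller $\lambda_\text{max}(P)$ than $Q = I$. A sketch with an assumed example $A$:

```python
# Sketch: among Q with lambda_min(Q) = 1, Q = I minimizes lambda_max(P).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])             # example Hurwitz matrix
rng = np.random.default_rng(0)

def lam_max_P(Q):
    """Largest eigenvalue of the solution of A^T P + P A = -Q."""
    P = solve_continuous_lyapunov(A.T, -Q)
    return np.linalg.eigvalsh(P).max()

baseline = lam_max_P(np.eye(2))         # lambda_max(P(I))
for _ in range(100):
    B = rng.standard_normal((2, 2))
    Q = B @ B.T + np.eye(2)             # positive definite, lambda_min >= 1
    Q /= np.linalg.eigvalsh(Q).min()    # normalize so lambda_min(Q) = 1
    assert lam_max_P(Q) >= baseline - 1e-12
print("Q = I minimizes lambda_max(P) among the sampled Q")
```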