Schur complement


I am trying to understand what steps are needed to go from $P-A^TPA\succ0$ (with $P \succ 0$; $G$ may be any matrix) to

$$\begin{bmatrix} P & A^TG^T \\ GA& G + G^T - P \end{bmatrix} \succ 0$$

Trying to follow the proof's key step,

$$T\begin{bmatrix} P & A^TG^T \\ GA& G + G^T - P \end{bmatrix}T^T = P-A^TPA \succ0$$

with $T = \begin{bmatrix} I & -A^T \end{bmatrix}$, I am only able to get $P-A^TGA-A^T(G-P)A^T$. If $G = P$, this would hold; but since $G$ can be any matrix, it can't hold in general, or am I wrong?
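To rule out a typo in the identity itself, here is a quick numerical sanity check (a sketch using numpy; the matrices are random and purely illustrative):

```python
import numpy as np

# Numerical sanity check of the congruence T M T^T = P - A^T P A
# with T = [I, -A^T], for a random (non-symmetric) G.
rng = np.random.default_rng(0)
n = 3

A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n))          # G need not be symmetric
Q = rng.standard_normal((n, n))
P = Q @ Q.T + n * np.eye(n)              # some symmetric P > 0

M = np.block([[P,     A.T @ G.T],
              [G @ A, G + G.T - P]])
T = np.hstack([np.eye(n), -A.T])

print(np.allclose(T @ M @ T.T, P - A.T @ P @ A))   # True for any G
```

Numerically the identity holds for every $G$, which makes me suspect an algebra slip on my side rather than a flaw in the theorem.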


Edit: I am reading the paper "A new discrete-time robust stability condition" by M.C. de Oliveira, J. Bernussou, J.C. Geromel.

A linear discrete-time system $x_{k+1} = A(\alpha)x_k + B(\beta)u_k$ is given, where $$A(\alpha) \in \mathcal{X}_A := \Big\{A(\alpha) : A(\alpha) = \sum_{i=1}^{N} \alpha_i A_i,\ \sum_{i=1}^{N} \alpha_i = 1,\ \alpha_i \geq 0\Big\},$$ $$B(\beta) \in \mathcal{X}_B := \Big\{B(\beta) : B(\beta) = \sum_{i=1}^{N} \beta_i B_i,\ \sum_{i=1}^{N} \beta_i = 1,\ \beta_i \geq 0\Big\}.$$
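Concretely, the uncertain matrix is just a convex combination of the vertex matrices; for instance with two hypothetical vertices (numpy; the values are made up):

```python
import numpy as np

# A(alpha) as a convex combination of vertex matrices A_i.
# The vertices A1, A2 and the weights alpha are made up for illustration.
A1 = np.array([[0.4, 0.1],
               [0.0, 0.6]])
A2 = np.array([[0.5, -0.2],
               [0.1, 0.7]])
alpha = np.array([0.3, 0.7])             # alpha_i >= 0, sum(alpha) = 1

A_alpha = alpha[0] * A1 + alpha[1] * A2  # an element of X_A
print(A_alpha)
```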

With the discrete-time Lyapunov function $V_k(\alpha)= x^T_kP(\alpha)x_k$, a system $x_{k+1} = A(\alpha)x_k$ is stable if: \begin{align*} V_k(\alpha) &= x^T_kP(\alpha)x_k \succ 0 \implies P(\alpha) \succ 0\\ \Delta V_k(\alpha) &= V_{k+1}(\alpha) - V_{k}(\alpha) = x^T_{k+1}P(\alpha)x_{k+1} - x^T_kP(\alpha)x_k\\ &= (A(\alpha)x_k)^TP(\alpha)(A(\alpha)x_k) - x^T_kP(\alpha)x_k\\ &= x^T_k\big(A^T(\alpha)P(\alpha)A(\alpha) - P(\alpha)\big)x_k \prec 0 \implies A^T(\alpha)P(\alpha)A(\alpha) - P(\alpha) \prec 0 \end{align*} Using the Schur complement (with respect to the block $Q$) and the state-feedback controller: \begin{align*} u_k &= Kx_k\\ x_{k+1} &= A(\alpha)x_k + B(\beta)u_k = A(\alpha)x_k + B(\beta)Kx_k \\ &= \underbrace{(A(\alpha) + B(\beta)K)}_{=:A(\alpha,\beta)}x_k = A(\alpha,\beta)x_k \end{align*}
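For a single fixed $A$ (no uncertainty), both Lyapunov conditions can be verified numerically by building $P = \sum_{k\ge 0} (A^k)^T Q\, A^k$, which solves $A^TPA - P = -Q$ whenever $A$ is Schur stable (a numpy sketch with a made-up stable $A$):

```python
import numpy as np

# For a Schur-stable A (spectral radius < 1), the series
#   P = sum_{k>=0} (A^k)^T Q A^k   with Q > 0
# converges and solves the discrete Lyapunov equation A^T P A - P = -Q,
# so P > 0 and A^T P A - P < 0 certify stability.
A = np.array([[0.5, 0.2],
              [0.0, 0.8]])               # eigenvalues 0.5, 0.8: stable
Q = np.eye(2)

P = np.zeros((2, 2))
Ak = np.eye(2)
for _ in range(300):                     # truncated series (converges fast)
    P += Ak.T @ Q @ Ak
    Ak = Ak @ A

print(np.all(np.linalg.eigvalsh(P) > 0))                   # P > 0
print(np.all(np.linalg.eigvalsh(A.T @ P @ A - P) < 0))     # Delta V_k < 0
```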

\begin{align*} \underbrace{P(\alpha,\beta)}_{R} - \underbrace{A^T(\alpha,\beta)P(\alpha,\beta)}_{S^T}\underbrace{P(\alpha,\beta)^{-1}}_{Q^{-1}}\underbrace{P(\alpha,\beta)A(\alpha,\beta)}_{S} &\succ 0 \iff \begin{bmatrix} P(\alpha,\beta) & P(\alpha,\beta)A(\alpha,\beta) \\ A^T(\alpha,\beta)P(\alpha,\beta) & P(\alpha,\beta) \end{bmatrix} \succ 0 \end{align*} The paper says:

Theorem 1. The following conditions are equivalent:

  1. There exists a symmetric matrix $P \succ 0$ such that $A^TPA - P \prec 0$.

  2. There exist a symmetric matrix $P$ and a matrix $G$ such that: $\begin{bmatrix} P & A^TG^T \\ GA & G + G^T - P \end{bmatrix} \succ 0$
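The Schur-complement step above (taken with respect to the $(1,1)$ block $P \succ 0$) can also be sanity-checked numerically: the block matrix is positive definite exactly when $P - A^TPA \succ 0$ (a numpy sketch with random matrices):

```python
import numpy as np

# Schur complement check: for symmetric P > 0,
#   [[P, P A], [A^T P, P]] > 0   <=>   P - A^T P A > 0
# (Schur complement of the (1,1) block P).
rng = np.random.default_rng(1)
n = 3

A = 0.4 * rng.standard_normal((n, n))    # small gain, likely Schur-stable
Q = rng.standard_normal((n, n))
P = Q @ Q.T + n * np.eye(n)              # symmetric P > 0

M = np.block([[P,       P @ A],
              [A.T @ P, P]])
schur = P - A.T @ P @ A                  # R - S^T Q^{-1} S with Q = P, S = P A

block_pd = np.all(np.linalg.eigvalsh(M) > 0)
schur_pd = np.all(np.linalg.eigvalsh(schur) > 0)
print(block_pd == schur_pd)              # the equivalence holds either way
```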

I am trying to understand the proof of the paper with:

$$T\begin{bmatrix} P & A^TG^T \\ GA& G + G^T- P \end{bmatrix}T^T = P-A^TPA \succ0$$

with $T = \begin{bmatrix} I & -A^T \end{bmatrix}$. I am trying to do it step by step, yet I always end up with

$$T\begin{bmatrix} P & A^TG^T \\ GA& G + G^T - P \end{bmatrix}T^T =P-A^TGA-A^T(G-P)A^T$$

I am not sure whether I am misunderstanding the proof or simply making a calculation error.
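To see whether the expression I end up with can be correct at all, I compared it numerically against $P - A^TPA$ for random matrices (numpy sketch):

```python
import numpy as np

# Compare my result  P - A^T G A - A^T (G - P) A^T  against the target
# P - A^T P A for random matrices: if they differ for a generic G, the
# problem is a calculation slip, not the theorem.
rng = np.random.default_rng(2)
n = 3

A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))
P = Q @ Q.T + n * np.eye(n)              # symmetric P > 0

mine = P - A.T @ G @ A - A.T @ (G - P) @ A.T
target = P - A.T @ P @ A
print(np.allclose(mine, target))         # False for generic G
```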