Get $A$ and $C$ matrix from Observability matrix


Assume that we have our observability matrix.

$$O_{obsv} = \begin{bmatrix} C\\ CA\\ CA^2\\ \vdots\\ CA^{n-1} \end{bmatrix}$$

We know the output vector $y(t) \in \mathbb{R}^i$ of the system, the dimension of $A \in \mathbb{R}^{n\times n}$, and the matrix $C \in \mathbb{R}^{i\times n}$.

We cannot use $$A = C^{-1}CA$$

because $C$ is not square.

Question: How can we find $A$ from $CA$ if we know $C$ and dimension of $A$?

Edit:

I'm asking about system identification, where you estimate a state space model from measured data - input and output. The algorithm is called MOESP.

Assume that we know the input $u(k) \in \mathbb{R}^{p}$ and the output $y(k) \in \mathbb{R}^{q}$.

Then we can create our Hankel matrices.

$$U = \begin{bmatrix} u(0) & u(1) & \dots & u(j-1) \\ u(1) & u(2) & \dots & u(j) \\ \vdots & \vdots & \ddots & \vdots \\ u(i-1) & u(i) & \dots & u(i+j -2) \end{bmatrix}$$

$$Y = \begin{bmatrix} y(0) & y(1) & \dots & y(j-1) \\ y(1) & y(2) & \dots & y(j) \\ \vdots & \vdots & \ddots & \vdots \\ y(i-1) & y(i) & \dots & y(i+j -2) \end{bmatrix}$$
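A block Hankel matrix like the ones above can be built with a short helper. This is a minimal sketch, assuming the samples are stored as columns of an array; the function name `block_hankel` is my own, not from a library:

```python
import numpy as np

def block_hankel(signal, i):
    """Build a block Hankel matrix with i block rows from a sample
    sequence stored as the columns of `signal` (shape (dim, N))."""
    dim, N = signal.shape
    j = N - i + 1                          # number of block columns
    H = np.zeros((dim * i, j))
    for r in range(i):
        # block row r holds samples r, r+1, ..., r+j-1
        H[r * dim:(r + 1) * dim, :] = signal[:, r:r + j]
    return H

# toy scalar input sequence u(0), ..., u(5)
u = np.arange(6.0).reshape(1, -1)
U = block_hankel(u, 3)     # 3 block rows, 4 columns
```

Here `U` comes out as `[[0,1,2,3],[1,2,3,4],[2,3,4,5]]`: each row is the previous one shifted one sample to the left, which is exactly the Hankel structure.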

From $U, Y$ we can find $R_{22}$ from the QR decomposition (here written in its transposed, LQ form):

$$\begin{bmatrix} U\\ Y \end{bmatrix} = \begin{bmatrix} R_{11} & 0 \\ R_{21} & R_{22} \end{bmatrix}\begin{bmatrix} Q_1\\ Q_2 \end{bmatrix}$$
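In NumPy this lower-triangular factorization can be obtained by taking the ordinary QR decomposition of the transposed stacked data matrix. A sketch with arbitrary random data standing in for the Hankel matrices (the block sizes 4 and 6 are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((4, 20))   # stands in for the input Hankel matrix
Y = rng.standard_normal((6, 20))   # stands in for the output Hankel matrix
W = np.vstack([U, Y])

# LQ factorization W = R Q with R lower triangular:
# take the QR factorization W^T = Qt Rt, then R = Rt^T, Q = Qt^T.
Qt, Rt = np.linalg.qr(W.T)
R, Q = Rt.T, Qt.T

R11 = R[:4, :4]                    # lower-triangular input block
R21 = R[4:, :4]
R22 = R[4:, 4:]                    # the block MOESP feeds to the SVD
```

The block above `R22` is zero because `R` is lower triangular, matching the factorization in the question, and `R @ Q` reconstructs `W`.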

Then we use the Singular Value Decomposition (SVD):

$$R_{22} = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} \sigma _1 & 0\\ 0 & 0 \end{bmatrix}\begin{bmatrix} V_1^T\\ V_2^T \end{bmatrix}$$

We can find the dimension (order) of the system from the number of nonzero singular values:

$$nx = \operatorname{size}(\sigma _1) $$

But if we want to reduce the effect of noise, we plot the singular values $\sigma _1$ and count the dominant ones. For example, if 4 values of $\sigma _1$ are large and the remaining values are small, then the true model order is $nx = 4$. A very smart method to reduce noise.
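That order-selection step can be sketched as follows. The singular values here are made up for illustration (4 dominant ones, the rest at noise level), and the relative threshold is an assumption; in practice one usually inspects a semilog plot by eye:

```python
import numpy as np

# hypothetical singular values: 4 dominant, 3 at noise level
s = np.array([12.0, 8.5, 3.2, 1.1, 1e-4, 8e-5, 3e-5])

# keep the values that are large relative to the biggest one;
# the tolerance 1e-3 is an arbitrary illustrative choice
tol = 1e-3 * s[0]
nx = int(np.sum(s > tol))
print(nx)   # → 4
```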

Anyway! Our extended observability matrix can be found from

$$O_{obsv} = U_1\sqrt{\sigma_1}$$

Then our $C$ matrix can be found from:

$$C = O_{obsv}(1:q, 1:nx)$$

And our $A$ matrix can be found from:

$$A = O_{obsv}(1:q(k-1), 1:nx)^{\dagger}\,O_{obsv}(q+1:kq, 1:nx)$$

because

$$O_{obsv}(1:q(k-1), 1:nx)A = O_{obsv}(q+1:kq, 1:nx)$$
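The identity above is the shift-invariance property: dropping the last block row of $O_{obsv}$ and multiplying by $A$ on the right gives $O_{obsv}$ with its first block row dropped, since row block $m$ is $CA^m$ and $CA^m \cdot A = CA^{m+1}$. The multiplication by the pseudo-inverse is just the least-squares solution of that overdetermined linear system. A small numerical check on a known system (the matrices $A$, $C$ are example values I chose, not from the question):

```python
import numpy as np

# a known observable system, used to build an extended observability matrix
A = np.array([[0.9, 0.2],
              [0.0, 0.7]])
C = np.array([[1.0, 0.0]])
q, nx, k = 1, 2, 5          # output dim, state dim, number of block rows

# O = [C; CA; CA^2; ...; CA^{k-1}]
O = np.vstack([C @ np.linalg.matrix_power(A, m) for m in range(k)])

# C is the first block row, as in C = O(1:q, 1:nx)
C_hat = O[:q, :]

# shift invariance: O(1:q(k-1), :) A = O(q+1:qk, :)
O_top = O[:q * (k - 1), :]   # O without its last block row
O_bot = O[q:, :]             # O without its first block row
A_hat = np.linalg.pinv(O_top) @ O_bot
```

Because the system is observable, `O_top` has full column rank and the least-squares solution `A_hat` recovers `A` exactly; with noisy data it is the best fit in the least-squares sense.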

I don't know if I can trust this "finding the A-matrix" method, because I don't understand it. I do know what the Moore-Penrose pseudo-inverse is.

So I need help understanding this "finding the A-matrix" method.

Thank you.


Accepted answer:

You can't recover $A$ from $C A$ alone. Take

$$ C = \begin{bmatrix} 0 & 1 \end{bmatrix},\, A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \,. $$

Now calculate $C A$:

$$ C A = \begin{bmatrix} a_{21} & a_{22} \end{bmatrix} \,. $$

Neither $a_{11}$ nor $a_{12}$ appears in $C A$. It's like asking the equation

$$ x = 0 \cdot a $$

to determine $a$, given $x$: no unique solution exists, because $a$ can be chosen arbitrarily.
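A quick numerical illustration of this point (the matrix entries are arbitrary example values): two matrices that differ in the rows annihilated by $C$ produce the same product $CA$, so no algorithm can tell them apart from $CA$ alone.

```python
import numpy as np

C = np.array([[0.0, 1.0]])

# two different A matrices that agree only in their second row
A1 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
A2 = np.array([[9.0, -7.0],
               [3.0, 4.0]])

# C selects the second row, so the first rows never enter the product
print(C @ A1)   # → [[3. 4.]]
print(C @ A2)   # → [[3. 4.]]
```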