Finding input matrix


Suppose we are given the following SISO state space system: \begin{equation} \dot{x} =Ax + Bu \\ y = Cx \end{equation} The impulse response of this system is given as $x(t) = \begin{pmatrix}e^{-t} + e^{-2t}+e^{3t}\\2e^{-2t}-e^{3t}\\e^{-t}+e^{-2t}\end{pmatrix}$. I want to find the matrix $B$. We know that the impulse response can be represented by $X(s) = (sI-A)^{-1}B$.

Do you have any ideas on how to approach this problem? Assuming a general $A$ and taking inverses, etc., seems very impractical.

BEST ANSWER

The state response $x(t)$ of a linear time invariant state space model, to an input $u(t)$, can be written as

$$ x(t) = e^{A\,t}x(0) + \int_0^t e^{A\,(t-\tau)} B\,u(\tau)\,d\tau. \tag{1} $$

For an impulse response it is assumed that the initial conditions are zero, i.e. $x(0)=0$, and that the input equals the Dirac delta function, $u(t)=\delta(t)$. The delta function collapses the integral in $(1)$ to $e^{A\,t}B$, so evaluating $(1)$ at $t=0^+$ (an infinitesimally small time step after zero) yields

$$ x(0^+) = B. \tag{2} $$

Therefore, the $B$ matrix can be obtained by evaluating $x(0^+)$. It can also be noted that the impulse response is equivalent to having no input and using a non-zero initial condition equal to $B$.
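As a quick numerical sanity check, one can evaluate the given impulse response at a small positive time; the NumPy sketch below (the function name `x` is just for illustration) recovers $B$ this way:

```python
import numpy as np

# The given impulse response x(t); by (2), x(0+) equals B.
def x(t):
    return np.array([np.exp(-t) + np.exp(-2*t) + np.exp(3*t),
                     2*np.exp(-2*t) - np.exp(3*t),
                     np.exp(-t) + np.exp(-2*t)])

B = x(1e-9)            # numerically approximates x(0+)
print(np.round(B, 6))  # -> [3. 1. 2.]
```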


I am going to describe several ways to solve the problem of finding the matrix $B$ and of establishing whether the system is controllable. I will conclude with a summary discussing how the controllability question could have been settled with a simple look at the system.

Finding the matrix $B$

The impulse response $h$ from the input to the state is simply given in this case by

$$h(t)=\exp(At)B$$ and, as a result (and as Kwin van der Veen mentioned), we have that $$\lim_{t\downarrow 0}h(t)=B.$$

By identification, you can find $B$, which is given in this case by $$B=\begin{bmatrix} 3\\1\\2 \end{bmatrix}.$$

Finding the eigenvectors of $A$ (the hard way)

The eigenvalues of the system are $-1,-2$ and $3$. Those are obviously distinct and, therefore, the matrix $A$ is diagonalizable and one of the possible associated diagonal matrices is given by $D=\mathrm{diag}(-1,-2,3)$, for instance. We also assume here that the matrix $B$ is not known and we define it as $$B=\begin{bmatrix} b_1\\b_2\\b_3 \end{bmatrix}.$$

We know that $$AP=PD$$ where $P$ is a matrix of eigenvectors. Therefore, we have that $$\exp(At)B=P\exp(Dt)P^{-1}B,$$ or, equivalently, that $$P^{-1}\exp(At)B=\exp(Dt)P^{-1}B,$$ which is an expression which is linear in $P^{-1}$. Let $Q:=P^{-1}$ and define $q_i$ to be the $i$-th row of $Q$.

We have that $$\begin{array}{rcl} Q\exp(At)B=&Q&\begin{bmatrix}1 & 1 & 1\\0 & 2 & -1\\1 & 1 & 0\end{bmatrix}\begin{bmatrix}e^{-t}\\e^{-2t}\\e^{3t}\end{bmatrix}\\ % &=&\begin{bmatrix} q_{11}+q_{13} & q_{11}+2q_{12}+q_{13} & q_{11}-q_{12}\\ q_{21}+q_{23} & q_{21}+2q_{22}+q_{23} & q_{21}-q_{22}\\ q_{31}+q_{33} & q_{31}+2q_{32}+q_{33} & q_{31}-q_{32} \end{bmatrix}\begin{bmatrix}e^{-t}\\e^{-2t}\\e^{3t}\end{bmatrix} \end{array}$$ and $$\exp(Dt)P^{-1}B=\begin{bmatrix} q_1^TB & 0 & 0\\ 0 & q_2^TB & 0\\ 0 & 0 & q_3^TB \end{bmatrix}\begin{bmatrix}e^{-t}\\e^{-2t}\\e^{3t}\end{bmatrix}.$$

Since the functions $e^{-t}$, $e^{-2t}$, and $e^{3t}$ are linearly independent, equating both expressions yields $$\begin{bmatrix} q_{11}+q_{13}-q_1^TB & q_{11}+2q_{12}+q_{13} & q_{11}-q_{12}\\ q_{21}+q_{23} & q_{21}+2q_{22}+q_{23}-q_2^TB & q_{21}-q_{22}\\ q_{31}+q_{33} & q_{31}+2q_{32}+q_{33} & q_{31}-q_{32}-q_3^TB \end{bmatrix}=0.$$

The first row is equivalent to $$\begin{bmatrix} 1 & -1 & 0\\ 1 & 2 & 1\\ 1-b_1 & -b_2 & 1-b_3 \end{bmatrix}\begin{bmatrix} q_{11}\\q_{12}\\q_{13} \end{bmatrix}=0.$$

There exists a non-trivial solution if and only if the determinant of the matrix is zero, that is, if and only if $b_1+b_2-3b_3+2=0$.

The second row is equivalent to $$\begin{bmatrix} 1 & -1 & 0\\ 1 & 0 & 1\\ 1-b_1 & 2-b_2 & 1-b_3 \end{bmatrix}\begin{bmatrix} q_{21}\\q_{22}\\q_{23} \end{bmatrix}=0.$$

There exists a non-trivial solution if and only if $b_1 + b_2 - b_3 - 2=0$.

Finally, the third and last row is equivalent to $$\begin{bmatrix} 1 & 0 & 1\\ 1 & 2 & 1\\ 1-b_1 & -1-b_2 & -b_3 \end{bmatrix}\begin{bmatrix} q_{31}\\q_{32}\\q_{33} \end{bmatrix}=0.$$

There exists a non-trivial solution if and only if $b_1 - b_3 - 1=0$.

Therefore, we end up with the system of equations: $$\begin{bmatrix} 1 & 1 & -3\\ 1 & 1 & -1\\ 1 & 0 & -1 \end{bmatrix}B=\begin{bmatrix}-2\\2\\1 \end{bmatrix},$$

which yields $B=(3,1,2)$, and we retrieve the previously obtained value for $B$.
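The final linear system is small enough to solve by hand, but a short NumPy sketch confirms the solution:

```python
import numpy as np

# Solve the 3x3 linear system for B obtained from the three
# determinant conditions above.
M = np.array([[1., 1., -3.],
              [1., 1., -1.],
              [1., 0., -1.]])
rhs = np.array([-2., 2., 1.])
B = np.linalg.solve(M, rhs)
print(B)  # -> [3. 1. 2.]
```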

From the obtained values for rows $q_1,q_2$ and $q_3$ of $Q$, we find the following matrices:

$$Q=\begin{bmatrix} -1 & -1 & 3\\ -1 & -1 & 1\\ -1 & 0 & 1 \end{bmatrix},\ P=\begin{bmatrix} 1/2 & -1/2 & -1\\ 0 & -1 & 1\\ 1/2 & -1/2 & 0 \end{bmatrix},\ A=PDQ=\begin{bmatrix} 2.5 & -0.5 & -3.5\\ -5 & -2 & 5\\ -0.5 & -0.5 & -0.5 \end{bmatrix}$$
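These matrices can be verified numerically; the sketch below reconstructs $A=PDQ$ with $P=Q^{-1}$ and checks its entries and eigenvalues:

```python
import numpy as np

# Reconstruct A = P D Q with P = Q^{-1} and check its eigenvalues.
Q = np.array([[-1., -1., 3.],
              [-1., -1., 1.],
              [-1.,  0., 1.]])
D = np.diag([-1., -2., 3.])
P = np.linalg.inv(Q)
A = P @ D @ Q
print(A)                                   # matches the matrix above
print(np.sort(np.linalg.eigvals(A).real))  # -> [-2. -1. 3.] (up to rounding)
```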

Finding the eigenvectors of $A$ (the easy way)

Let $(v_1,v_2,v_3)$ be the eigenvectors associated with the eigenvalues $\lambda_1=-1$, $\lambda_2=-2$, and $\lambda_3=3$. It is known that $(v_1,v_2,v_3)$ forms a basis of $\mathbb{R}^3$. So, we can choose (Why?) those vectors such that

$$B=v_1+v_2+v_3.$$

We, therefore, have that $$h(t)=\exp(At)B=v_1e^{-t}+v_2e^{-2t}+v_3e^{3t},$$ from which we directly obtain that

$$v_1=\begin{bmatrix}1\\0\\1 \end{bmatrix},\ v_2=\begin{bmatrix}1\\2\\1 \end{bmatrix},\ v_3=\begin{bmatrix}1\\-1\\0 \end{bmatrix},$$

which are, up to scaling, the same basis vectors (the columns of $P$) as obtained before.

Using the PBH test for establishing controllability

We can then use the PBH test for checking controllability. With simple eigenvalues, the pair $(A,B)$ is controllable if and only if $w^TB\neq 0$ for every left eigenvector $w$ of $A$; these left eigenvectors are the rows of $Q=P^{-1}$. Since there are no zero entries in $$QB=\begin{bmatrix}2\\-2\\-1\end{bmatrix}$$ and the eigenvalues are simple, the system is controllable.
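The PBH condition (each left eigenvector $w$ of $A$ must satisfy $w^TB\neq 0$) can be reproduced numerically, since the left eigenvectors of $A$ are the eigenvectors of $A^T$ (a NumPy sketch; the eigenvectors come out normalized, so only the nonzero-ness of the products matters):

```python
import numpy as np

# PBH-style check: for each left eigenvector w of A (i.e. each
# eigenvector of A.T), verify that w @ B is nonzero.
A = np.array([[ 2.5, -0.5, -3.5],
              [-5.0, -2.0,  5.0],
              [-0.5, -0.5, -0.5]])
B = np.array([3., 1., 2.])
eigvals, W = np.linalg.eig(A.T)  # columns of W are left eigenvectors of A
for lam, w in zip(eigvals, W.T):
    print(lam.real, w.real @ B)  # every product should be nonzero
```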

Using the controllability matrix test for establishing controllability (the hard way)

Now that we have found the matrices $A$ and $B$, we can consider the controllability matrix, which is given by $$\mathcal{C}(A,B)=\begin{bmatrix} 3 & 0 & 14\\ 1 & -7 & -1\\ 2 & -3 & 5 \end{bmatrix}.$$ Immediate calculations show that this matrix is full-rank (its determinant equals $40$), which means that the system is controllable.
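A NumPy sketch of this computation, building $\begin{bmatrix}B & AB & A^2B\end{bmatrix}$ and checking its rank:

```python
import numpy as np

# Build the controllability matrix [B, AB, A^2 B] and check its rank.
A = np.array([[ 2.5, -0.5, -3.5],
              [-5.0, -2.0,  5.0],
              [-0.5, -0.5, -0.5]])
B = np.array([3., 1., 2.])
C = np.column_stack([B, A @ B, A @ A @ B])
print(C)                         # columns are B, AB, A^2 B
print(np.linalg.matrix_rank(C))  # -> 3, so the system is controllable
```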

Using the controllability matrix test for establishing controllability (the easy way)

In fact, the controllability matrix can be directly computed from the impulse response $h(t)$ without all the above hassle by noting that

$$h^{(i)}(t)=A^i\exp(At)B$$ and thus

$$h^{(i)}(0)=A^iB,$$

where we can recognize the columns of the controllability matrix. Therefore, we have that $$\mathcal{C}(A,B)=\begin{bmatrix} h(0) & h'(0) & h''(0)\end{bmatrix},$$ from which we find exactly the same result as with the other methods, with far fewer calculations.
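Since $Av_i=\lambda_i v_i$ gives $AH=HD$, the derivatives satisfy $h^{(i)}(0)=HD^i\mathbf{1}$, so the controllability matrix follows directly from the impulse-response coefficients (a NumPy sketch):

```python
import numpy as np

# Columns of the controllability matrix from the impulse response:
# h^(i)(0) = H D^i 1, with H the coefficient matrix and D = diag(-1, -2, 3).
H = np.array([[1., 1.,  1.],
              [0., 2., -1.],
              [1., 1.,  0.]])
D = np.diag([-1., -2., 3.])
one = np.ones(3)
C = np.column_stack([H @ np.linalg.matrix_power(D, i) @ one for i in range(3)])
print(C)  # same controllability matrix as before
```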

Using the controllability Gramian

Another approach is based on the calculation of Gramians. The issue here is that one eigenvalue is unstable, but we can shift the eigenvalues of the system without changing its controllability properties; i.e. the pair $(A,B)$ is controllable if and only if the pair $(A-\alpha I,B)$ is controllable for any $\alpha\in\mathbb{R}$.

Picking, for instance, $\alpha=4$ makes the matrix $A-\alpha I$ Hurwitz stable, so one can use the controllability Gramian. Define

$$H=\begin{bmatrix} 1 & 1 & 1\\0 & 2 & -1\\1 & 1 & 0 \end{bmatrix},$$

and we have that $h(t)=\exp(At)B=Hu(t)$ where $u(t)=(e^{-t},e^{-2t},e^{3t})$. Therefore, we have that $e^{-4t}\exp(At)B=H\bar u(t)$ where $\bar u(t)=(e^{-5t},e^{-6t},e^{-t})$.

The controllability Gramian is given by

$$\begin{array}{rcl} W_c&=&\int_0^\infty H\bar u(t)\bar u(t)^TH^Tdt\\ &=& H\left(\int_0^\infty\bar u(t)\bar u(t)^Tdt\right)H^T\\ &=& \begin{bmatrix} 1 & 1 & 1\\0 & 2 & -1\\1 & 1 & 0 \end{bmatrix}\begin{bmatrix} 1/10 & 1/11 & 1/6\\1/11 & 1/12 & 1/7\\1/6 & 1/7 & 1/2 \end{bmatrix}\begin{bmatrix} 1 & 1 & 1\\0 & 2 & -1\\1 & 1 & 0 \end{bmatrix}^T \end{array}$$

We have that $H$ is invertible, so the system is controllable if and only if the central matrix is nonsingular. It can be checked that its smallest eigenvalue is around $1.8\cdot 10^{-4}$, which means that the matrix is close to singular. So, although the system is controllable, it is close to being uncontrollable. In fact, the determinant of the controllability Gramian is approximately $3.1234\cdot 10^{-5}$.
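These numbers can be reproduced with a NumPy sketch: the central matrix has entries $M_{ij}=1/(s_i+s_j)$, where $s=(5,6,1)$ are the decay rates of $\bar u(t)$, since $\int_0^\infty e^{-(s_i+s_j)t}\,dt=1/(s_i+s_j)$:

```python
import numpy as np

# Controllability Gramian of the shifted system: W_c = H M H^T, where
# M_ij = 1/(s_i + s_j) for the decay rates s = (5, 6, 1) of ubar(t).
H = np.array([[1., 1.,  1.],
              [0., 2., -1.],
              [1., 1.,  0.]])
s = np.array([5., 6., 1.])
M = 1.0 / (s[:, None] + s[None, :])
Wc = H @ M @ H.T
print(np.linalg.eigvalsh(M).min())  # ~1.8e-4: M is nearly singular
print(np.linalg.det(Wc))            # ~3.12e-5
```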

A final comment using linear (in)dependence

What is interesting, in the end, is that all those calculations are unnecessary. Why? Because what really matters here is that the entries of $\exp(At)B$ are linearly independent functions of $t$.

To explain this, let us write the impulse response of the system as

$$h(t)=H\exp(Dt)\mathbf{1}$$ where $H\in\mathbb{R}^{n\times n}$, $D$ is a diagonal matrix containing the eigenvalues of $A$ (assumed to be diagonalizable with distinct eigenvalues) and $\mathbf{1}$ is a vector of ones.

Then, we have that

$$\mathcal{C}(A,B)=\begin{bmatrix}H\mathbf{1} & HD\mathbf{1} & \ldots & HD^{n-1}\mathbf{1} \end{bmatrix}.$$

Clearly, if $H$ is not full rank, then there exists a vector $v$ such that $v^TH=0$ and, therefore, $v^T\mathcal{C}(A,B)=0$, showing that the system is not controllable. This proves that a necessary condition for the controllability matrix to be full-rank is that $H$ is full-rank as well. To prove the converse, we note that

$$\mathcal{C}(A,B)=H\begin{bmatrix}\mathbf{1} & D\mathbf{1} & \ldots & D^{n-1}\mathbf{1} \end{bmatrix}.$$ The second matrix is a Vandermonde matrix, which is known to be nonsingular since the eigenvalues of $D$ are distinct. Hence, if $H$ is full-rank, then $\mathcal{C}(A,B)$ is full rank.
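This factorization is easy to check numerically; the sketch below builds the $3\times 3$ Vandermonde matrix for the eigenvalues $-1,-2,3$ and verifies that $\mathcal{C}(A,B)=HV$ is full-rank:

```python
import numpy as np

# Factor the controllability matrix as H times a Vandermonde matrix
# built from the eigenvalues (-1, -2, 3).
H = np.array([[1., 1.,  1.],
              [0., 2., -1.],
              [1., 1.,  0.]])
lams = np.array([-1., -2., 3.])
V = np.column_stack([lams**i for i in range(3)])  # Vandermonde: [1, D 1, D^2 1]
C = H @ V
print(np.linalg.det(V))          # nonzero since the eigenvalues are distinct
print(np.linalg.matrix_rank(C))  # -> 3
```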

This connects well with the fact that we need $H$ to be full-rank for the Gramian to be full-rank. Also, linear independence of the functions $e^{-t}$, $e^{-2t}$, and $e^{3t}$ implies that the central matrix in the Gramian expression is positive definite.

In the specific case considered here, a lack of controllability would mean that $H$ is not full-rank, i.e. that the states depend on each other, making them impossible to control independently of each other.