Relation between controllability and stabilization of a system


Suppose I have a control system described by:

$$ \left\{ \begin{array}{c} \dot{x}(t)=Ax(t)+Bu(t)\\ y(t)=Cx(t)+Du(t) \end{array} \right. $$

and I know it is controllable. I apply the state feedback $u(t) = -Kx(t)+r(t)$, which turns it into:

$$\dot{x}(t)=(A-BK)x(t)+Br(t)$$

From what I understand, even if $A$ is not stable (meaning not all of its eigenvalues have negative real parts), state feedback can always make the closed-loop system stable, because the system is controllable. Why is this true, and how can I see it for $x\in\mathbb{R}^n$? (No full proof is needed; a general direction is good enough.)

I understand that the requirement for stability is that $\hat{A}=A-BK$ have only eigenvalues with negative real parts, but why is this always possible?



There are 3 answers below.

Accepted answer:

For single-input systems, if the system is controllable, you can directly calculate a vector $k^T \in \mathbb{R}^n$ such that $A-bk^T$ has any desired characteristic polynomial, using Ackermann's formula. To see why it works, there is a nice derivation here: http://www.cambridge.org/us/features/chau/webnotes/chap9acker.pdf
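As a concrete illustration, here is a minimal NumPy sketch of Ackermann's formula; the function name and the double-integrator example are my own choices, not from the linked notes:

```python
import numpy as np

def ackermann(A, b, poles):
    """Single-input pole placement: return k (1 x n) with eig(A - b k) = poles."""
    n = A.shape[0]
    # Controllability matrix [b, Ab, ..., A^{n-1} b]
    C = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    # Desired characteristic polynomial p(s) (monic), evaluated at the matrix A:
    # p(A) = A^n + a_1 A^{n-1} + ... + a_n I
    coeffs = np.poly(poles)
    pA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    # Ackermann's formula: k = [0 ... 0 1] C^{-1} p(A)
    e_last = np.zeros((1, n))
    e_last[0, -1] = 1.0
    return e_last @ np.linalg.inv(C) @ pA

# Example: double integrator, closed-loop poles placed at -1 and -2
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
b = np.array([[0.0],
              [1.0]])
k = ackermann(A, b, [-1.0, -2.0])   # k = [[2, 3]]
```

With this gain, $A - bk = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}$, whose characteristic polynomial is $s^2 + 3s + 2 = (s+1)(s+2)$, as desired.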

For multi-input systems you can select an arbitrary fan-out vector $f \in \mathbb{R}^m$ and calculate $k^T$ for the system $(A, Bf)$, which is now a single-input system. This is called the dyadic approach, and $f$ can be selected to meet additional design criteria, such as robustness. The state feedback gain is then $K = fk^T$.

There are other approaches as well for selecting $K$ for multi-input systems.
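A short sketch of the dyadic approach, using SciPy's `place_poles` for the single-input subproblem; the particular $A$, $B$, and $f$ below are hypothetical choices for illustration:

```python
import numpy as np
from scipy.signal import place_poles

# Two-input example system: a double integrator actuated on both states
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.eye(2)                    # m = 2 inputs

# Pick a fan-out vector f and reduce to the single-input pair (A, Bf)
f = np.array([[1.0],
              [1.0]])
bf = B @ f                       # (A, bf) happens to be controllable here

# Solve the single-input pole-placement problem
kT = place_poles(A, bf, [-1.0, -2.0]).gain_matrix   # shape (1, n)

# Recover the full multi-input gain
K = f @ kT                       # shape (m, n); eig(A - B K) = {-1, -2}
```

Since $BK = (Bf)k^T$, the closed-loop matrix $A - BK$ has exactly the poles placed for the single-input system.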

Answer 2:

To put it simply, a system being controllable means that every state $x \in \mathbb{R}^n$ can be driven to any other state in finite time through some input $u$. This is precisely what controllability means.

It is always possible, because for each state you have an input, and you can pick the input arbitrarily.

For example, take a system $\dot x = Ax + Bu$. Expanded component-wise, it may look like:

$$\dot x_1 = x_1 + u_1$$ $$\dot x_2 = x_2 + u_2$$

Here $$A =\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} $$ (with $B = I$), which is clearly unstable: both eigenvalues equal $1 > 0$.

We have an input for each state equation, so we can simply pick $$u_1 = -2x_1$$$$u_2 = -2x_2$$ in other words $$u = -\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} x = -Kx$$

So we have

$$\dot x_1 = -x_1$$ $$\dot x_2 = -x_2$$

Which implies

$$x_1(t) = \exp(-t)x_1(0)$$ $$x_2(t) = \exp(-t)x_2(0)$$

which makes the state go to zero as $t \to \infty$.
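The decay above can be checked numerically with the matrix exponential; this is a small sketch, and the initial condition is arbitrary:

```python
import numpy as np
from scipy.linalg import expm

A = np.eye(2)            # open loop: eigenvalues 1, 1 -> unstable
K = 2.0 * np.eye(2)      # feedback gain from the example above
Acl = A - K              # closed loop A - K = -I: eigenvalues -1, -1

x0 = np.array([3.0, -4.0])
x5 = expm(Acl * 5.0) @ x0    # x(5) = e^{-5} x0, already close to zero
```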


Note that fully controllable systems are not that common in practice. More common are systems that are merely stabilizable: $$\dot x_1 = x_1 + u_1$$ $$\dot x_2 = -x_2$$

Here, we cannot adjust the state $x_2$ using any input. But thank goodness, it goes to zero all by itself.


Finally, here is a system that is neither controllable nor stabilizable: $$\dot x_1 = x_1 + u_1$$ $$\dot x_2 = x_2$$

Here we can never hope to drive the state $x_2$ to zero. It will always blow up. End of story.
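The last two cases can be distinguished programmatically: the controllability-matrix rank test separates controllable from uncontrollable, and the PBH test separates stabilizable from unstabilizable. A sketch, with helper names of my own choosing:

```python
import numpy as np

def ctrb_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^{n-1} B]."""
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    return np.linalg.matrix_rank(C)

def is_stabilizable(A, B):
    """PBH test: rank [A - lam*I, B] = n for every eigenvalue lam with Re(lam) >= 0."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if lam.real >= 0:
            M = np.hstack([A - lam * np.eye(n), B])
            if np.linalg.matrix_rank(M) < n:
                return False
    return True

B = np.array([[1.0],
              [0.0]])                # input only enters the first equation

A_stab = np.diag([1.0, -1.0])        # x2 decays on its own -> stabilizable
A_bad  = np.diag([1.0, 1.0])         # x2 blows up, untouched by u -> not stabilizable
```

Both pairs fail the controllability rank test (`ctrb_rank` returns 1 < 2), but only `(A_stab, B)` passes the PBH stabilizability test.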

Answer 3:

This is a standard result in control theory: $(A,B)$ is controllable if and only if there exists a $K$ such that the eigenvalues of $A-BK$ can be placed arbitrarily (subject to complex eigenvalues appearing in conjugate pairs, since $K$ is real). In most textbooks a proof is given for the single-input case ($B$ a column vector). The proof is constructive and is based on the transformation into the so-called controllable canonical form. You can find the details in any textbook on advanced linear control theory; see, for example, Antsaklis and Michel, "Linear Systems".
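The "arbitrary placement" direction can also be checked numerically for a controllable single-input pair, e.g. with SciPy's `place_poles`; a randomly generated pair is controllable with probability one, and the seed below is arbitrary:

```python
import numpy as np
from scipy.signal import place_poles

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # a random (A, b) pair is controllable almost surely
b = rng.standard_normal((3, 1))

# Place the closed-loop eigenvalues wherever we like (real and distinct here)
K = place_poles(A, b, [-1.0, -2.0, -3.0]).gain_matrix
```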