How can I find the control for a finite system by definition?


Currently I am working on control theory, specifically controllability, but still on the basics. I am studying the following example by E. Zuazua:

It says, consider the following problem

\begin{equation} \begin{pmatrix} \dot{x_1} \\ \dot{x_2} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \\ \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u(t) \end{equation}

and given any initial state $(x_1,x_2)=(x_{0}^{1}, x_{0}^{2})$ and final state $(x_1,x_2)=(x_{1}^{1}, x_{1}^{2})$, it is easy to find a regular trajectory $z$ such that \begin{equation} (1) \begin{array}{cc} z(0)=x_{0}^{1} & z(T)=x_{1}^{1}\\ z'(0)=x_{0}^{2} & z'(T)=x_{1}^{2} \end{array} \end{equation}

In fact, there are infinitely many ways of constructing such functions; for instance, take $z$ to be a cubic polynomial. Define $u=z''+z$; then the solution of the system with this control coincides with $z$, i.e. $x_1=z$, and it satisfies the control requirements (1).
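As a concrete numerical sketch of this construction (the endpoint values and horizon $T$ below are hypothetical choices, and the Python/SciPy implementation is mine, not the author's):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical data: steer from (x1, x2) = (1, 0) to (2, -1) in time T = 2.
T = 2.0
z0, dz0 = 1.0, 0.0    # z(0) = x_0^1, z'(0) = x_0^2
zT, dzT = 2.0, -1.0   # z(T) = x_1^1, z'(T) = x_1^2

# Fit a cubic z(t) = a0 + a1 t + a2 t^2 + a3 t^3 to the four conditions (1).
M = np.array([[1, 0,    0,      0],
              [0, 1,    0,      0],
              [1, T, T**2,   T**3],
              [0, 1,  2*T, 3*T**2]], dtype=float)
a = np.linalg.solve(M, [z0, dz0, zT, dzT])

z   = lambda t: a[0] + a[1]*t + a[2]*t**2 + a[3]*t**3
ddz = lambda t: 2*a[2] + 6*a[3]*t
u   = lambda t: ddz(t) + z(t)          # the control u = z'' + z

# Simulate x1' = x2, x2' = -x1 + u(t) and check that x follows z.
rhs = lambda t, x: [x[1], -x[0] + u(t)]
sol = solve_ivp(rhs, (0, T), [z0, dz0], rtol=1e-10, atol=1e-12)
print(sol.y[0, -1], sol.y[1, -1])      # lands on the target state (2, -1)
```

Any other interpolant matching the four conditions (a quintic, a trigonometric polynomial, ...) works the same way, which is one way to see the infinitude of controls.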

Now up to here, I don't really see how it can be proven that there are infinitely many, and even less how to find such functions. I've tried taking $u=t^3$ and using Mathematica to solve the system with the conditions $x(0)=0$, $x(2)=3$, but it doesn't produce any solution.

Now the solution using variation of parameters is

\begin{equation*} \left\{ \begin{array}{ll} \displaystyle x_1(t) = C_1 \cos(t) + C_2 \sin(t) + \int_{0}^{t} \sin(t-s)u(s)\,ds \\ \displaystyle x_2(t) = -C_1 \sin(t) + C_2 \cos(t) + \int_{0}^{t} \cos(t-s)u(s)\,ds \\ \end{array} \right. \end{equation*}

with $C_1 = x_1(0)$ and $C_2 = x_2(0)$.
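A quick numerical cross-check of the variation-of-parameters solution (the control $u(s)=s$ and initial state below are hypothetical; for this $A$, $e^{A(t-s)}B=(\sin(t-s),\cos(t-s))^{T}$):

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Hypothetical data: u(s) = s, x(0) = (1, 0), evaluated at t = 2.
u = lambda s: s
C1, C2, tf = 1.0, 0.0, 2.0    # C1 = x_1(0), C2 = x_2(0)

# Variation-of-parameters expressions for x1(tf) and x2(tf).
x1 = C1*np.cos(tf) + C2*np.sin(tf) + quad(lambda s: np.sin(tf - s)*u(s), 0, tf)[0]
x2 = -C1*np.sin(tf) + C2*np.cos(tf) + quad(lambda s: np.cos(tf - s)*u(s), 0, tf)[0]

# Direct simulation of x1' = x2, x2' = -x1 + u(t) for comparison.
sol = solve_ivp(lambda t, x: [x[1], -x[0] + u(t)], (0.0, tf), [C1, C2],
                rtol=1e-10, atol=1e-12)
print(np.allclose([x1, x2], sol.y[:, -1], atol=1e-6))   # True
```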

The questions are: why does $u=z''+z$ work, and how can I find such a trajectory? Is there any other? If so, how do I find suitable initial and final states?

PS. I know that such a function exists because the system is controllable by the Kalman condition, and that I can find the control using the adjoint system, but I'd like to be able to see the definition satisfied directly.

Thanks so much in advance.

Best answer

For a linear time-invariant system $$\dot{x}=Ax+Bu,\text{ with } x(t_0)=x_0$$ we have the general solution

$$x(t)=\exp(A(t-t_0))x_0+\int_{t_0}^t\exp(A(t-\tau))Bu(\tau)d\tau.$$

We will split the time into multiple segments:

  • $t_0\leq t < t_1$ with constant control input $u_0$
  • $t_1 \leq t < t_2$ with constant control input $u_1$
  • ...
  • $t_{n-1} \leq t < t_n$ with constant control input $u_{n-1}$

Then we can rewrite the general solution at $t=t_n$ as

$$x(t_n)-\exp(A(t_n-t_0))x_0=\sum_{i=0}^{n-1}\exp(A(t_n-t_{i+1}))\int_{t_i}^{t_{i+1}}\exp(A(t_{i+1}-\tau))d\tau \,Bu_i.$$

We introduce $$Q_i=\int_{t_i}^{t_{i+1}}\exp(A(t_{i+1}-\tau))d\tau\, B=\int_{t_i}^{t_{i+1}}\exp(A(\tau-t_i))d\tau\, B$$

(the substitution $s=t_{i+1}-\tau$ shows the two integrals are equal) to simplify the previous expression: $$x(t_n)-\exp(A(t_n-t_0))x_0=\sum_{i=0}^{n-1}\exp(A(t_n-t_{i+1}))Q_iu_i$$

If the system is controllable, then for suitable segment lengths these linear equations can be solved for the control inputs $u_i$ (for a single input, $n$ segments give $n$ scalar unknowns against $n$ state equations).
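A minimal numerical sketch of this procedure for the $2\times 2$ system from the question (the initial state, target state, and segment times are hypothetical choices):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0., 1.], [-1., 0.]])
B = np.array([[0.], [1.]])
x0 = np.array([1., 0.])      # hypothetical initial state
xT = np.array([2., -1.])     # hypothetical target state
t = [0.0, 1.0, 2.0]          # two segments: [t0, t1) and [t1, t2)

# Q_i = int_0^dt exp(A s) ds B = A^{-1} (exp(A dt) - I) B
# (the closed form is valid here because this A is invertible).
def Q(dt):
    return np.linalg.solve(A, expm(A*dt) - np.eye(2)) @ B

# x(t2) - exp(A t2) x0 = exp(A (t2 - t1)) Q_0 u_0 + Q_1 u_1; solve for (u_0, u_1).
M = np.hstack([expm(A*(t[2] - t[1])) @ Q(t[1] - t[0]), Q(t[2] - t[1])])
u = np.linalg.solve(M, xT - expm(A*t[2]) @ x0)

# Verify by simulating the two segments with their constant inputs.
f = lambda ui: (lambda s, x: A @ x + B.ravel()*ui)
s1 = solve_ivp(f(u[0]), (t[0], t[1]), x0, rtol=1e-10, atol=1e-12)
s2 = solve_ivp(f(u[1]), (t[1], t[2]), s1.y[:, -1], rtol=1e-10, atol=1e-12)
print(s2.y[:, -1])           # close to the target xT
```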


Relationship to controllability matrix. We have

$$Q_i=\int_{t_i}^{t_{i+1}}\exp(A(\tau-t_i))d\tau\, B.$$

Let us further investigate this expression by applying the definition of the exponential matrix:

$$ \exp(A(\tau-t_i))B =\left[I + \dfrac{1}{1!}A(\tau-t_i)+\dfrac{1}{2!}A^2(\tau-t_i)^2+...+\dfrac{1}{(n-1)!}A^{n-1}(\tau-t_i)^{n-1}+...\right]B.$$

After integrating this expression from $t_i$ to $t_{i+1}$ we will obtain

$$Q_i =\left[I(t_{i+1}-t_{i}) + \dfrac{1}{2!}A(t_{i+1}-t_{i})^2+\dfrac{1}{3!}A^2(t_{i+1}-t_{i})^3+...+\dfrac{1}{n!}A^{n-1}(t_{i+1}-t_{i})^n+...\right]B.$$

By the Cayley-Hamilton theorem we know that all powers of $A$ of order $n$ and higher can be represented as linear combinations of lower powers of $A$. More formally: for every $m \geq n$ we can find coefficients $\alpha_0$, $\alpha_1$, ..., $\alpha_{n-1}$ (depending on $m$) such that

$$A^m=\alpha_0I+\alpha_1A+...+\alpha_{n-1}A^{n-1}.$$
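For the $A$ of the question this reduction is easy to verify (a small sketch; its characteristic polynomial is $\lambda^2+1$, so Cayley-Hamilton gives $A^2=-I$, i.e. $\alpha_0=-1$, $\alpha_1=0$ for $m=2$):

```python
import numpy as np

A = np.array([[0., 1.], [-1., 0.]])

# Cayley-Hamilton for this A: A^2 = -I, so alpha_0 = -1, alpha_1 = 0.
print(np.allclose(A @ A, -np.eye(2)))                        # True
# Higher powers then reduce as well, e.g. A^3 = -A and A^4 = I.
print(np.allclose(np.linalg.matrix_power(A, 4), np.eye(2)))  # True
```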

Hence, we can collapse the series in the previous expression by collecting all terms with $I$, $A$, $A^2$, ..., $A^{n-1}$ and introducing new coefficients: $\beta_0$ for the $I$ term, $\beta_1$ for the $A$ term, and so forth (each $\beta_j$ depends on $t_{i+1}-t_{i}$). We obtain

$$Q_i = \left[\beta_0I+\beta_1A+...+\beta_{n-1}A^{n-1}\right]B$$ $$= \begin{bmatrix}B & AB & \cdots &A^{n-1}B\end{bmatrix} \begin{bmatrix}\beta_0 \\ \beta_1 \\ \vdots \\ \beta_{n-1} \end{bmatrix}. $$

As you can see, the controllability matrix appears naturally in $Q_i$: the columns of $Q_i$ always lie in the column space of $\begin{bmatrix}B & AB & \cdots & A^{n-1}B\end{bmatrix}$.
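The Kalman rank condition the asker mentions is easy to check numerically for this system; a quick sketch:

```python
import numpy as np

A = np.array([[0., 1.], [-1., 0.]])
B = np.array([[0.], [1.]])

# Kalman controllability matrix [B, AB] for the n = 2 system of the question.
C = np.hstack([B, A @ B])
print(C)                          # [[0, 1], [1, 0]]
print(np.linalg.matrix_rank(C))   # 2 -> the pair (A, B) is controllable
```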


The infinitude of trajectories.

Assume that $(A, B)$ is controllable, and shift coordinates so that the origin coincides with the final state. Then we can use a full-state feedback control law $u=-Kx$ to obtain the asymptotically stable closed-loop system

$$\dot{x}=[A-BK]x.$$

We only have to choose $K$ such that all eigenvalues of $A-BK$ have strictly negative real part. Then for every initial condition $x(t_0)=x_0$ (including the shifted initial condition) the state vector converges, $x \to 0$, and recall that we shifted $x$ so that the final position is the origin. Since there are infinitely many admissible choices of $K$, we get infinitely many trajectories that start at the initial condition and end at the origin. (Note that this feedback reaches the origin only asymptotically; for an exact transfer in finite time one uses open-loop constructions like the one above.)
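A sketch of this idea in Python/SciPy (the pole locations and initial state below are arbitrary illustrative choices): two different stabilizing gains $K$ produce two different trajectories, both ending at the origin.

```python
import numpy as np
from scipy.signal import place_poles
from scipy.integrate import solve_ivp

A = np.array([[0., 1.], [-1., 0.]])
B = np.array([[0.], [1.]])
x0 = np.array([1., 0.])                  # hypothetical (already shifted) initial state

for poles in ([-1., -2.], [-3., -4.]):   # two different stable pole sets
    K = place_poles(A, B, np.array(poles)).gain_matrix
    closed_loop = A - B @ K              # eigenvalues are the requested poles
    sol = solve_ivp(lambda s, x: closed_loop @ x, (0., 10.), x0,
                    rtol=1e-9, atol=1e-12)
    print(poles, np.linalg.norm(sol.y[:, -1]))   # norm decays toward 0
```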


I think I now understand the method that the author is describing, but I will put it into a more general formulation. Assume a scalar differential equation of the following form

$$f(x',x'',...,x^{(n)}) + ax=u,$$

in which $a\neq 0$. Then introduce the control input

$$u = f(x',x'',...,x^{(n)})+a\bar{u}.$$

Then we obtain

$$x=\bar{u}.$$

By choosing any $\bar{u}$ with $\bar{u}(t_0)=x_0$ and $\bar{u}(t_\text{final})=x_1$, we see that any trajectory from $x_0$ to $x_1$ can be dictated by an appropriate $u$. This method is called dynamic inversion. The problem with this method is that it needs a very precise model. Additionally, it may require large control inputs (not energy efficient, and they amplify noise). Another problem with this method is that we must be able to measure $x',x'',\ldots,x^{(n)}$ in order to cancel the dynamics.
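The cancellation idea can be sketched on a made-up nonlinear example (the system, reference, and horizon below are my own illustrative choices, not from the text): take $x'' + (x')^3 + x = u$, so $f(x',x'')=x''+(x')^3$ and $a=1$, and feed forward $u = \bar{u}'' + (\bar{u}')^3 + \bar{u}$ for the reference $\bar{u}(t)=\sin t$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Open-loop dynamic inversion on a hypothetical nonlinear system:
#   x'' + (x')^3 + x = u.
# Reference ubar(t) = sin(t); feed u = ubar'' + (ubar')^3 + ubar.
ubar   = np.sin
dubar  = np.cos
ddubar = lambda t: -np.sin(t)
u = lambda t: ddubar(t) + dubar(t)**3 + ubar(t)

# Simulate x1 = x, x2 = x':  x1' = x2, x2' = -x2^3 - x1 + u(t).
rhs = lambda t, x: [x[1], -x[1]**3 - x[0] + u(t)]
sol = solve_ivp(rhs, (0., 5.), [ubar(0.), dubar(0.)], rtol=1e-10, atol=1e-12)
print(abs(sol.y[0, -1] - ubar(5.)))   # tiny: the state tracks the reference
```

Note that this is pure feedforward: any mismatch between the model and the true $f$ makes the tracking error drift, which is exactly the precision caveat above.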