Reference Request: Algorithm for Eigenvalues Placement for MIMO LTI Dynamical Systems


Let $n,m \in \mathbb{N}$, $n,m \geq 1$, $X \in \mathbb{R}^{n \times 1}$, $A\in \mathbb{R}^{n \times n}$, $B\in \mathbb{R}^{n \times m}$ and $U \in \mathbb{R}^{m \times 1}$. For the LTI system $$ \dot{X} = A\cdot X + B\cdot U$$ it is known that there exists $K \in \mathbb{R}^{n \times m}$ such that $$\hat{A} = A - B\cdot K^T$$ has any desired set of eigenvalues $\iff$ $A$ has no left eigenvector orthogonal to all the columns of $B$.

My question is: are there any algorithms to find $K$ that require nothing more than this existence criterion and impose no conditions on the desired set of eigenvalues?

For instance, the algorithm behind "place" (Kautsky et al.) in Matlab cannot assign an eigenvalue with multiplicity greater than the rank of $B$.
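This limitation can be seen directly in SciPy's implementation of the same Kautsky–Nichols–Van Dooren approach, `scipy.signal.place_poles`. A quick sketch (the double-integrator pair here is my own illustration, not from the question): distinct poles are placed fine, while a pole repeated more often than $\operatorname{rank}(B)$ is rejected.

```python
import numpy as np
from scipy.signal import place_poles

# Two decoupled double integrators: rank(B) = 2.
A = np.array([[0., 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
B = np.array([[0., 0],
              [1, 0],
              [0, 0],
              [0, 1]])

# Distinct poles (each with multiplicity 1): placement succeeds.
fb = place_poles(A, B, [-1, -2, -3, -4])
print(np.sort(np.linalg.eigvals(A - B @ fb.gain_matrix).real))

# A pole of multiplicity 3 > rank(B) = 2: place_poles raises ValueError.
try:
    place_poles(A, B, [-1, -1, -1, -2])
    rejected = False
except ValueError:
    rejected = True
print("rejected:", rejected)
```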

Thank you!

Best Answer

Your posed existence criterion boils down to requiring that the pair $(A,B)$ be controllable. Namely, your second formulation is essentially the Hautus lemma.

My answer is based on an answer of mine to a related question. This is my own approach to the problem, so it might not match any known algorithm exactly, but it should give a reasonable picture of the steps involved. There is also a paper going into more detail on how to solve the problem in a more robust and numerically stable way; that paper also contains a few references to other algorithms.

For convenience I will define the full-state feedback gain as $K\in\mathbb{R}^{m \times n}$ (so no need for the transpose). This means that you are interested in finding a $K$ such that

$$ A - B\,K = \hat{A} = V\,\Lambda\,V^{-1}, \tag{1} $$

with $\Lambda,V \in \mathbb{R}^{n \times n}$. The right hand side of $(1)$ is the Jordan decomposition of $\hat{A}$, such that the columns of $V$ contain the generalized eigenvectors of $\hat{A}$ and $\Lambda$ is a Jordan matrix. In order to find a $K$ such that the eigenvalues of $\hat{A}$ match a given set of eigenvalues, including the information about the geometric multiplicity (so $\Lambda$), you also have to solve for $V$ at the same time. To do this, $(1)$ can be rewritten as

$$ A\,V - B\,\Omega = V\,\Lambda, \tag{2} $$

with $\Omega = K\,V$, which also lies in $\mathbb{R}^{m \times n}$. This equation is linear in all unknowns, which will make it easier to solve later on. Once $(2)$ is solved for $V$ and $\Omega$, the full-state feedback gain can be recovered as $K = \Omega\,V^{-1}$. Using the structure of $\Lambda$, the dependencies between the columns of $V$ and $\Omega$ decouple into groups of columns, one group per Jordan block of $\Lambda$. Namely, at the start of each Jordan block you have to solve

$$ A\,V_{\bullet,i} - B\,\Omega_{\bullet,i} = \Lambda_{i,i}\,V_{\bullet,i}, \tag{3} $$

with $i$ the first index associated with a Jordan block, where $X_{\bullet,i}$ denotes the $i$th column of a matrix $X$. In order to obtain a nonzero solution, an additional constraint has to be added, such as $\|V_{\bullet,i}\| = 1$. Each subsequent equation associated with that Jordan block is
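Equation $(3)$ can be solved numerically by stacking the unknowns: the pairs $(V_{\bullet,i}, \Omega_{\bullet,i})$ are exactly the vectors in the null space of the block matrix $[\,A - \Lambda_{i,i}\,I \;\; -B\,]$. A minimal NumPy sketch, using a pair of decoupled double integrators of my own choosing as a placeholder system:

```python
import numpy as np

A = np.array([[0., 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
B = np.array([[0., 0],
              [1, 0],
              [0, 0],
              [0, 1]])
n, m = B.shape
lam = -1.0  # desired eigenvalue Lambda_{i,i}

# Null space of M = [A - lam*I, -B]; each null vector stacks a V column on top
# of an Omega column. For a controllable pair its dimension is m.
M = np.hstack([A - lam * np.eye(n), -B])
_, s, Vt = np.linalg.svd(M)
null = Vt[np.sum(s > 1e-10):].T      # (n+m) x nullity basis of the null space
print("null-space dimension:", null.shape[1])

# Split one basis vector and rescale so that ||V_col|| = 1.
scale = np.linalg.norm(null[:n, 0])
v_col = null[:n, 0] / scale
omega_col = null[n:, 0] / scale
# Check equation (3): A v - B omega = lam * v
print(np.allclose(A @ v_col - B @ omega_col, lam * v_col))
```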

$$ A\,V_{\bullet,i+k} - B\,\Omega_{\bullet,i+k} = V_{\bullet,i+k-1} + \Lambda_{i+k,i+k}\,V_{\bullet,i+k}, \tag{4} $$

with $k$ ranging from one up to one less than the size of the Jordan block. Note that, by the definition of a Jordan matrix, $\Lambda_{i+k,i+k} = \Lambda_{i,i}$ for all such $k$. These equations do not have a trivial zero solution. However, both $(3)$ and $(4)$ can have more unknowns than equations, so some extra constraints or minimization criteria have to be used. I believe many algorithms differ precisely in which criteria they choose.

It can be noted that the place command of MATLAB also constrains $\Lambda$ to a Jordan matrix whose blocks are all of size one (so exactly diagonal). This implies that it is not possible to assign an eigenvalue with multiplicity greater than the rank of $B$, since the eigenvectors should all be linearly independent of each other; otherwise $V^{-1}$ would not exist. To see why this is the case, $(3)$ can be rewritten as

$$ (A - \Lambda_{i,i}\,I) V_{\bullet,i} = B\,\Omega_{\bullet,i}. \tag{5} $$

The right hand side of $(5)$ is just a linear combination of the columns of $B$, so the left hand side must be as well. The number of linearly independent vectors the right hand side can produce is, by definition, equal to the rank of $B$. Either $\Lambda_{i,i}$ is not an eigenvalue of $A$, in which case $A - \Lambda_{i,i}\,I$ is invertible and maps any linearly independent set of vectors $V_{\bullet,i}$ to another linearly independent set, and vice versa. Or $\Lambda_{i,i}$ is an eigenvalue of $A$, in which case eigenvectors of $A$ corresponding to that eigenvalue are themselves solutions for $V_{\bullet,i}$ (with $\Omega_{\bullet,i}$ in the null space of $B$). However, the number of remaining solutions decreases by the same amount because of the Hautus lemma: when $A - \Lambda_{i,i}\,I$ loses rank, the linear combinations of its columns lose the ability to form the corresponding columns of $B$ by the same amount. Therefore the geometric multiplicity of an eigenvalue can never be greater than the rank of $B$, which also implies that the number of Jordan blocks associated with the same eigenvalue can never be greater than the rank of $B$.
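This rank bound is easy to check numerically. In the decoupled double-integrator pair used here for illustration (my choice; it matches the worked example further below), $\operatorname{rank}(B) = 2$, so the candidate eigenvectors $(A - \Lambda_{i,i}\,I)^{-1} B\,\Omega_{\bullet,i}$ for a diagonal $\Lambda$ all live in a two-dimensional subspace, and a third repeated eigenvector is impossible:

```python
import numpy as np

A = np.array([[0., 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
B = np.array([[0., 0], [1, 0], [0, 0], [0, 1]])
n = 4
lam = -1.0  # not an eigenvalue of A (A is nilpotent, all its eigenvalues are 0)

# Candidate eigenvectors for a diagonal Jordan structure: V_col = (A - lam I)^{-1} B w.
T = np.linalg.solve(A - lam * np.eye(n), B)
print("rank of (A - lam*I)^{-1} B:", np.linalg.matrix_rank(T))  # = rank(B) = 2

# Any three such candidates are linearly dependent -> V would be singular.
rng = np.random.default_rng(0)
V3 = T @ rng.standard_normal((2, 3))
print("rank of three candidates:", np.linalg.matrix_rank(V3))   # at most 2
```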

If $\Lambda_{i,i}$ is not an eigenvalue of $A$, one way of reducing the number of unknowns is to solve $(3)$ and $(4)$ for the eigenvectors given a chosen $\Omega$, by rewriting them as

$$ V_{\bullet,i} = (A - \Lambda_{i,i}\,I)^{-1} B\,\Omega_{\bullet,i}, \tag{6} $$

$$ V_{\bullet,i+k} = (A - \Lambda_{i,i}\,I)^{-1} (B\,\Omega_{\bullet,i+k} + V_{\bullet,i+k-1}). \tag{7} $$

This should yield a solution as long as $\Omega_{\bullet,i}$ does not lie in the null space of $B$ and the columns obtained from $(6)$ for the same eigenvalue are linearly independent.
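Putting $(6)$ and $(7)$ together, the whole procedure can be sketched in a few lines of NumPy: pick a Jordan structure and an $\Omega$, build $V$ column by column, and recover $K = \Omega\,V^{-1}$. The system and the $2{+}2$ Jordan structure below are my choice for illustration; they match the double-integrator example discussed afterwards.

```python
import numpy as np

def place_jordan(A, B, blocks, Omega):
    """Compute K from (6)/(7). blocks is a list of (eigenvalue, size) pairs;
    Omega is a freely chosen m x n matrix (it must yield a nonsingular V)."""
    n = A.shape[0]
    V = np.zeros((n, n))
    i = 0
    for lam, size in blocks:
        M = A - lam * np.eye(n)
        # First column of the block, eq. (6): V_i = M^{-1} B Omega_i
        V[:, i] = np.linalg.solve(M, B @ Omega[:, i])
        # Remaining columns, eq. (7): V_{i+k} = M^{-1} (B Omega_{i+k} + V_{i+k-1})
        for k in range(1, size):
            V[:, i + k] = np.linalg.solve(M, B @ Omega[:, i + k] + V[:, i + k - 1])
        i += size
    return Omega @ np.linalg.inv(V), V

# Two decoupled double integrators; place all eigenvalues at -1
# using two Jordan blocks of size 2.
A = np.array([[0., 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
B = np.array([[0., 0], [1, 0], [0, 0], [0, 1]])
Omega = np.array([[1., 0, 0, 0], [0, 0, 1, 0]])   # arbitrary, gives nonsingular V

K, V = place_jordan(A, B, [(-1.0, 2), (-1.0, 2)], Omega)
print(K)                                           # [[1, 2, 0, 0], [0, 0, 1, 2]]
print(np.sort(np.linalg.eigvals(A - B @ K).real))  # all four eigenvalues at -1
```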


For example, consider the following system

$$ A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0\\ 0 & 1 \end{bmatrix}. \tag{8} $$

This system consists of two decoupled double integrators and is controllable. Suppose all eigenvalues should be placed at $-1$. The rank of $B$ is two, so there are three choices for the Jordan block sizes, namely 2 and 2, 3 and 1, or a single block of size 4. For the first choice the associated $\Lambda$ is given by

$$ \Lambda = \begin{bmatrix} -1 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & -1 \end{bmatrix}. $$

When solving all of $(6)$ and $(7)$ for this choice, it actually does not matter much what is picked for $\Omega$. As long as it gives a nonsingular $V$, the feedback gain will always be

$$ K = \begin{bmatrix} 1 & 2 & 0 & 0 \\ 0 & 0 & 1 & 2 \end{bmatrix}. $$

It can be noted that this is actually also a decoupled controller. For the second choice the associated $\Lambda$ is given by

$$ \Lambda = \begin{bmatrix} -1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix}. $$

When solving all of $(6)$ and $(7)$ for this choice, it does matter what is picked for $\Omega$. I wrote a quick MATLAB program which, after normalizing $V$, tries to minimize $\|V^\top V - I\|_F$ (a measure of how orthogonal the eigenvectors are), where $\|X\|_F$ denotes the Frobenius norm of $X$. This objective does seem to have local minima, because the program does not always converge to the same solution. One of the solutions gives the following feedback gain

$$ K = \begin{bmatrix} -0.3628 & 0.6372 & -0.6724 & -0.6724 \\ 2.7621 & 2.7621 & 2.3628 & 3.3628 \end{bmatrix}. $$
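Since this gain is printed rounded to four decimals and a size-3 Jordan block is numerically sensitive (an entry perturbation of order $10^{-4}$ can move the eigenvalues by roughly its cube root), only a loose numerical check of it is meaningful. A small NumPy sketch:

```python
import numpy as np

A = np.array([[0., 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
B = np.array([[0., 0], [1, 0], [0, 0], [0, 1]])
K = np.array([[-0.3628, 0.6372, -0.6724, -0.6724],
              [ 2.7621, 2.7621,  2.3628,  3.3628]])

eigs = np.linalg.eigvals(A - B @ K)
print(eigs)              # clustered around -1, spread out by the rounding of K
print(eigs.sum().real)   # equals trace(A - B K), which should be exactly -4
```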

For the third choice the associated $\Lambda$ is given by

$$ \Lambda = \begin{bmatrix} -1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & -1 \end{bmatrix}. $$

When solving all of $(6)$ and $(7)$ it again matters what is picked for $\Omega$. When using the same optimization criterion as for the previous choice of $\Lambda$, there again seem to be multiple local minima. One of the solutions gives a feedback gain very similar to the gain from the first choice

$$ K = \begin{bmatrix} 1.0001 & 2.0000 & 0.0005 & 0.0000 \\ -0.0000 & 0.0000 & 0.9999 & 2.0000 \end{bmatrix}. $$

This is not the best algorithm, but it should give you an indication of what the first steps would generally be, and of what could be used as criteria to optimize your solution with respect to some cost function.