General solution of $x^{\prime}(t)=A x(t)$ [repeated real/complex eigenvalues case]


Could anybody give me the general solution of the autonomous linear system $X^{\prime}(t)=A X(t)$ in the case where there are repeated real eigenvalues and repeated complex eigenvalues?

The case where the eigenvalues are all distinct is handled by the following theorem, but for repeated eigenvalues I have not found a statement that satisfies me.

Theorem: Consider the system $X^{\prime}=A X$ where $A$ has distinct eigenvalues $\lambda_{1}, \ldots, \lambda_{k_{1}} \in \mathbb{R}$ and $\alpha_{1}+i \beta_{1}, \ldots, \alpha_{k_{2}}+i \beta_{k_{2}} \in \mathbb{C}.$ Let $T$ be the matrix that puts $A$ in the canonical form $T^{-1} A T=\left(\begin{array}{cccccc}{\lambda_{1}} \\ {} & {\ddots} & {} \\ {} & {} & {\lambda_{k_{1}}} & {} \\ {} & {} & {} & {B_{1}} \\ {} & {} & {} & {} & {\ddots} \\ {} & {} & {} & {} & {} & {B_{k_{2}}}\end{array}\right)$

where $$ B_{j}=\left(\begin{array}{cc}{\alpha_{j}} & {\beta_{j}} \\ {-\beta_{j}} & {\alpha_{j}}\end{array}\right) $$ Then the general solution of $X^{\prime}=A X$ is $T Y(t)$, where

$Y(t)=\left(\begin{array}{c}{c_{1} e^{\lambda_{1} t}} \\ {\vdots} \\ {\vdots} \\ {a_{1} e^{\alpha_{1} t} \cos \beta_{1} t+b_{1} e^{\alpha_{1} t} \sin \beta_{1} t} \\ {-a_{1} e^{\alpha_{1} t} \sin \beta_{1} t+b_{1} e^{\alpha_{1} t} \cos \beta_{1} t} \\ {\vdots} \\ {a_{k_{2}} e^{\alpha_{k_{2}} t} \cos \beta_{k_{2}} t+b_{k_{2}} e^{\alpha_{k_{2}} t} \sin \beta_{k_{2}} t} \\ {-a_{k_{2}} e^{\alpha_{k_{2}} t} \sin \beta_{k_{2}} t+b_{k_{2}} e^{\alpha_{k_{2}} t} \cos \beta_{k_{2}} t}\end{array}\right)$
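As a numerical sanity check of the theorem, here is a sketch (the specific matrix, time, and coefficients are my own choice, not from the question) comparing the $\cos/\sin$ formula against `scipy.linalg.expm` for a $2\times 2$ block that is already in the real canonical form, so $T = I$:

```python
import numpy as np
from scipy.linalg import expm

# A is already the canonical block B = [[alpha, beta], [-beta, alpha]],
# so T = I; its eigenvalues are alpha ± i*beta = 1 ± 2i.
alpha, beta = 1.0, 2.0
A = np.array([[alpha, beta], [-beta, alpha]])

t = 0.7
x0 = np.array([1.0, 0.0])          # corresponds to a1 = 1, b1 = 0 in Y(t)

# Solution predicted by the theorem: first pair of rows of Y(t)
y = np.exp(alpha * t) * np.array([np.cos(beta * t), -np.sin(beta * t)])

# Reference solution via the matrix exponential
x = expm(A * t) @ x0
assert np.allclose(x, y)
```

The same check works for a larger $A$ by conjugating a block-diagonal canonical form with an invertible $T$.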

Thanks for your help


On BEST ANSWER

For a general square matrix $A$ there is a Jordan form $J$ such that

$A = PJP^{-1}$

Note that

$A^n = PJP^{-1}PJP^{-1}\cdots PJP^{-1} = PJ^nP^{-1}$

Therefore, from the Taylor series expansion of the exponential (which contains the various powers of $A$):

$e^{At} = Pe^{Jt}P^{-1}$.
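The identity $e^{At} = Pe^{Jt}P^{-1}$ can be checked numerically; this sketch (the particular $J$ and $P$ are illustrative choices) builds a defective $A$ from a $2\times 2$ Jordan block:

```python
import numpy as np
from scipy.linalg import expm

# Build A = P J P^{-1} from a 2x2 Jordan block (repeated eigenvalue 2)
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
A = P @ J @ np.linalg.inv(P)

t = 0.5
lhs = expm(A * t)                          # exponential of A directly
rhs = P @ expm(J * t) @ np.linalg.inv(P)   # via the Jordan form
assert np.allclose(lhs, rhs)
```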

Now in general

$J = \begin{bmatrix} J_1 & 0 & \cdots & 0 \\ 0 & J_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & J_N\end{bmatrix} $

and

$J^n = \begin{bmatrix} J_1^n & 0 & \cdots & 0 \\ 0 & J_2^n & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & J_N^n\end{bmatrix} $

Therefore, again following from Taylor series expansion:

$e^{Jt} = \begin{bmatrix} e^{J_1t} & 0 & \cdots & 0 \\ 0 & e^{J_2t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{J_Nt}\end{bmatrix} $

So if we can find an expression for $e^{J_at}$ we are done, since:

$e^{At} = Pe^{Jt}P^{-1} = P\begin{bmatrix} e^{J_1t} & 0 & \cdots & 0 \\ 0 & e^{J_2t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{J_Nt}\end{bmatrix} P^{-1}$.
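The block-diagonal structure of $e^{Jt}$ is easy to verify numerically; a minimal sketch (the two blocks are illustrative choices) using `scipy.linalg.block_diag`:

```python
import numpy as np
from scipy.linalg import expm, block_diag

# Two Jordan blocks: a 2x2 block for eigenvalue 3 and a 1x1 block for eigenvalue -1
J1 = np.array([[3.0, 1.0],
               [0.0, 3.0]])
J2 = np.array([[-1.0]])
J = block_diag(J1, J2)

t = 0.4
# The exponential of a block-diagonal matrix is the block-diagonal
# matrix of the exponentials of the blocks.
assert np.allclose(expm(J * t), block_diag(expm(J1 * t), expm(J2 * t)))
```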

To find an expression for $e^{J_at}$, we expand $e^{xt}$ as a Taylor series around the corresponding eigenvalue $\lambda_a$:

$e^{xt} = \sum_{n=0}^{\infty} \frac{t^ne^{\lambda_at}}{n!}(x-\lambda_a)^n$

and substitute $J_a$ into the expression:

$e^{J_at} = \sum_{n=0}^{\infty} \frac{t^ne^{\lambda_at}}{n!}(J_a-\lambda_aI)^n$.

Here $(J_a-\lambda_aI)$ is a nilpotent matrix: its only nonzero entries are ones on the first superdiagonal:

$J_a-\lambda_aI = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$

$(J_a-\lambda_aI)^2 = \begin{bmatrix} 0 & 0 & 1 & \cdots & 0 \\ 0 & 0 & 0 & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 1 \\ 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & \cdots & 0 & 0 \end{bmatrix}$

$(J_a-\lambda_aI)^{m-1} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & \cdots & 0 & 0 \end{bmatrix}$

where $m$ is the dimension of the corresponding Jordan block $J_a$; clearly $(J_a-\lambda_aI)^m$ and all higher powers are zero. So only $m$ terms of the Taylor series are nonzero, and since the powers of $(J_a-\lambda_aI)$ follow a very nice pattern (the band of ones shifts up by one superdiagonal at each power and vanishes at power $m$), we have:

$e^{J_at} = \sum_{n=0}^{m-1} \frac{t^ne^{\lambda_at}}{n!}(J_a-\lambda_aI)^n = \begin{bmatrix} e^{\lambda_at} & te^{\lambda_at} & \frac{t^2}{2}e^{\lambda_at} & \cdots & \frac{t^{m-1}}{(m-1)!}e^{\lambda_at} \\ 0 & e^{\lambda_at} & te^{\lambda_at} & \cdots & \vdots \\ \vdots & \vdots & \ddots & \ddots & \vdots\\ 0 & 0 & 0 & e^{\lambda_at} & te^{\lambda_at} \\ 0 & 0 & 0 & 0 & e^{\lambda_at} \end{bmatrix}$.
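This closed form for a single Jordan block can be confirmed numerically; a sketch (eigenvalue, block size, and time are illustrative choices) comparing the truncated series against `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

lam, m, t = 2.0, 3, 0.3
# m x m Jordan block for eigenvalue lam: lam on the diagonal, ones above it
Ja = lam * np.eye(m) + np.diag(np.ones(m - 1), k=1)

# Closed form: e^{lam t} * sum_{n<m} (t^n / n!) N^n, with N the nilpotent part
N = Ja - lam * np.eye(m)
closed = np.exp(lam * t) * sum(
    (t ** n / factorial(n)) * np.linalg.matrix_power(N, n) for n in range(m)
)
assert np.allclose(expm(Ja * t), closed)
```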

  • EDIT:

This may not be very useful if you want to put the solution into real form with sines and cosines; there is another approach which I think is better suited to that purpose.

For the response of the system to an initial condition $x_0$, you can write $x_0$ as a linear combination of the ordinary and generalized eigenvectors of $A$, since they span all of $\mathbb{R}^n$:

$x_0 = P \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n\end{bmatrix}$

Then you can write your solution as a superposition of the individual responses. These basis vectors are precisely the columns of the matrix $P$ that puts $A$ into Jordan canonical form. For the individual responses to these components: if $v_i$ is an ordinary eigenvector you have

$e^{At}(c_iv_i) = e^{\lambda_it}(c_iv_i)$.

For generalized eigenvectors you have:

$ e^{At}c_if_i = e^{\lambda_it}e^{(A-\lambda_iI)t}c_if_i= c_ie^{\lambda_it}\sum_{k = 0}^{\infty}\frac{{t^k(A-\lambda_iI)^k}}{k!}f_i $

If $f_i$ is an $r^{\text{th}}$-order generalized eigenvector, then all powers from $r$ on vanish and we have

$ e^{At}c_if_i = c_ie^{\lambda_it}\sum_{k = 0}^{r-1}\frac{{t^k(A-\lambda_iI)^k}}{k!}f_i = c_ie^{\lambda_it}\sum_{k = 0}^{r-1}\frac{{t^k}}{k!}f_i^{r-k} $
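The chain formula above can be tested on a single Jordan block; in this sketch (the block and the chain are illustrative choices) `f3` is a third-order generalized eigenvector and each multiplication by $(A-\lambda I)$ lowers its order:

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

lam, t = 2.0, 0.6
A = lam * np.eye(3) + np.diag([1.0, 1.0], k=1)   # a single 3x3 Jordan block

# Chain: f3 is a 3rd-order generalized eigenvector; (A - lam I) lowers the order
f3 = np.array([0.0, 0.0, 1.0])
f2 = (A - lam * np.eye(3)) @ f3
f1 = (A - lam * np.eye(3)) @ f2                  # ordinary eigenvector
chain = [f3, f2, f1]                             # chain[k] = f^{(r-k)}, r = 3

# e^{At} f3 = e^{lam t} * sum_{k<r} (t^k / k!) f^{(r-k)}
rhs = np.exp(lam * t) * sum(
    (t ** k / factorial(k)) * chain[k] for k in range(3)
)
assert np.allclose(expm(A * t) @ f3, rhs)
```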

where $f_i^{r-k}$ denotes the $(r-k)^{\text{th}}$-order generalized eigenvector in the corresponding chain (each multiplication by $(A-\lambda_iI)$ lowers the order by one until you reach an ordinary eigenvector, and higher powers give zero). However, if an eigenvector chain for some eigenvalue is complex, there must also be a conjugate chain for the corresponding conjugate eigenvalue, and hence the coefficients used to write $x_0$ as a linear combination of eigenvectors come in complex-conjugate pairs as well. Therefore the conjugate pair of chains contributes conjugate solutions:

$ e^{At}c_i^*f_i^* = c_i^*e^{\lambda_i^*t}\sum_{k = 0}^{r-1}\frac{{t^k(A-\lambda_i^*I)^k}}{k!}f_i^* = c_i^*e^{\lambda_i^*t}\sum_{k = 0}^{r-1}\frac{{t^k}}{k!}(f_i^{r-k})^* $

Note that for complex eigenvalues and eigenvectors the corresponding coefficients $c_i$ in the linear combination are also complex. From this point you can simplify and arrive at the sine/cosine representation of the system response.


Thank you again for your interesting addition. For my part, I found the following result:

If $A \in \mathbb{M}_{n}(\mathbb{R})$, the solutions of $x^{\prime}(t)=A x(t)$ in $\mathbb{R}^{n}$ have the form: $$ x(t)=\sum_{1 \leq j \leq q} e^{t \alpha_{j}}\left(\sum_{0 \leq k \leq m_{j}-1} t^{k}\left(\cos \left(\beta_{j} t\right) a_{j, k}+\sin \left(\beta_{j} t\right) b_{j, k}\right)\right) $$ where $\alpha_{j}=\Re\left(\lambda_{j}\right)$, $\beta_{j}=\Im\left(\lambda_{j}\right)$, and the vectors $a_{j, k}, b_{j, k}$ lie in $E_{j}$ (the characteristic subspaces, with $\mathbb{R}^{n}=E_{1} \oplus \cdots \oplus E_{q}$).
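This $t^k \cos/\sin$ structure shows up concretely for a repeated complex pair; a sketch (the block and parameters are my own illustrative choices) using the real Jordan block $M = \begin{bmatrix} B & I \\ 0 & B \end{bmatrix}$ for $\alpha \pm i\beta$ with multiplicity 2, whose exponential contains the $t\cos\beta t$ and $t\sin\beta t$ terms:

```python
import numpy as np
from scipy.linalg import expm

# Real Jordan block for a repeated complex pair alpha ± i*beta (multiplicity 2)
alpha, beta, t = 0.5, 2.0, 0.9
B = np.array([[alpha, beta], [-beta, alpha]])
M = np.block([[B, np.eye(2)], [np.zeros((2, 2)), B]])

# e^{Bt} = e^{alpha t} * (rotation by beta t)
R = np.exp(alpha * t) * np.array([[np.cos(beta * t), np.sin(beta * t)],
                                  [-np.sin(beta * t), np.cos(beta * t)]])

# The superdiagonal shift commutes with diag(B, B), so e^{Mt} = [[R, t*R], [0, R]]:
# the t*R block carries the t*cos and t*sin terms of the general solution.
expected = np.block([[R, t * R], [np.zeros((2, 2)), R]])
assert np.allclose(expm(M * t), expected)
```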