Differential equation: find fundamental set of solutions


We have $y' = Ay$ with $A = \begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix} $

First I found the eigenvalues, which are $e_1 = 2$ and $e_2 = 1$. Both have algebraic multiplicity $α = 1$.
Then I found the geometric multiplicity, which is $γ = 1$ for both eigenvalues.
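As a quick sanity check of these values (a sketch using numpy, not part of the original working):

```python
import numpy as np

# The coefficient matrix from the question.
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])

# numpy returns the eigenvalues (and a matrix of eigenvectors) of A.
eigvals, eigvecs = np.linalg.eig(A)
print(sorted(eigvals))  # [1.0, 2.0]

# Two distinct eigenvalues of a 2x2 matrix: each necessarily has
# algebraic and geometric multiplicity 1, so A is diagonalizable.
```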

Now I am having trouble setting up the fundamental set of solutions.
I tried to use the following rule:

Every function (= column of the fundamental matrix) of the fundamental set of solutions is a linear combination of the following functions: $x^ve^{e_jx}$ with $1 \leq j \le 2$, $0 \le v \le α_j-γ_j$

The theorem works when I have only one eigenvalue, but this time I have two different eigenvalues and it is not working. I don't see where my mistake is.

I get:

$$φ = \begin{pmatrix} a_1e^{2x} \\ a_2e^x \end{pmatrix} $$

Afterwards I would usually compare the coefficients of $Aφ$ and $φ'$ to find $a_1$ and $a_2$, but this $φ$ seems to be wrong.

Alternatively, I could solve this differential equation the "Jordan matrix way", but I want to know where my mistake is.

3 Answers

Best Answer

As you noted, the matrix $\begin{pmatrix} 2&1\\0&1 \end{pmatrix}$ is diagonalizable with eigenvalues $1$ and $2$ respectively. The corresponding eigenvectors are $v_1=\begin{pmatrix}1\\-1\end{pmatrix}$ and $v_2=\begin{pmatrix} 1\\0 \end{pmatrix}$.

The general solution of this system of differential equations is $$ae^{x}v_1+be^{2x}v_2=\begin{pmatrix}ae^x+be^{2x}\\-ae^x\end{pmatrix}.$$

Let's check whether this is correct: $$\begin{pmatrix}ae^x+be^{2x}\\-ae^x\end{pmatrix}'=\begin{pmatrix}ae^x+2be^{2x}\\-ae^x\end{pmatrix}=\begin{pmatrix}2&1\\0&1\end{pmatrix}\begin{pmatrix}ae^x+be^{2x}\\-ae^x\end{pmatrix}.$$ Yeah, this seems to work :)
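The same check can be done numerically (a sketch with scipy; the constants $a, b$ are chosen arbitrarily): integrate $y' = Ay$ starting from the closed form's value at $x=0$ and compare with the closed form at $x=1$.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
a, b = 1.0, 2.0  # arbitrary constants

def phi(x):
    # Closed-form solution a*e^x*v1 + b*e^{2x}*v2 from above.
    return np.array([a*np.exp(x) + b*np.exp(2*x), -a*np.exp(x)])

# Integrate y' = Ay numerically from the closed form's value at x=0 ...
sol = solve_ivp(lambda x, y: A @ y, (0.0, 1.0), phi(0.0),
                rtol=1e-10, atol=1e-12)

# ... and compare with the closed form at x=1.
print(np.allclose(sol.y[:, -1], phi(1.0), atol=1e-6))  # True
```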

In general, if the matrix is diagonalizable, the solution is given by $$\sum_{i}a_ie^{\lambda_ix}v_i,$$ where the $a_i$ are constants, the $\lambda_i$ are the eigenvalues, and the $v_i$ are the corresponding eigenvectors. Whenever the matrix is not diagonalizable, the deficiency is measured by powers of $x$ in front of $e^{\lambda_ix}$, and the eigenvectors $v_i$ are replaced with generalized eigenvectors. (This is exactly what the Jordan canonical form yields.)

Let's go out of our way and "prove" the above statement for diagonalizable matrices. Recall that if $A$ is diagonalizable, then $A=P^{-1}DP$, where $D$ is the diagonal matrix with the eigenvalues on the diagonal and the $i$-th column of $P^{-1}$ is an eigenvector corresponding to the $i$-th eigenvalue.

Ignoring the fact that we are dealing with a system of equations, we would write \begin{eqnarray} \frac{\mathrm{d}y}{\mathrm{d}x} &=& Ay\\ y &=& e^{Ax}C. \end{eqnarray} But what is $e^{Ax}$? Well, $Ax$ is a matrix and $e^{B}=\sum_{k\geq 0}\frac{B^k}{k!}$. Thus (using $(P^{-1}DPx)^k = P^{-1}(Dx)^kP$, since the inner $PP^{-1}$ factors cancel) \begin{eqnarray} e^{Ax} &=& \sum_{k\geq 0} \frac{(Ax)^k}{k!}\\ &=& \sum_{k\geq 0} \frac{(P^{-1}DPx)^k}{k!}\\ &=& \sum_{k\geq 0} P^{-1}\frac{(Dx)^k}{k!}P\\ &=& P^{-1}\begin{pmatrix} \sum_k \frac{(\lambda_1x)^k}{k!}& 0& \dots \\ 0&\sum_k \frac{(\lambda_2x)^k}{k!}& \dots \\ \vdots & \ddots& \ddots \end{pmatrix}P\\ &=& P^{-1}\begin{pmatrix} e^{\lambda_1x}& 0& \dots \\0&e^{\lambda_2x}& \dots \\ \vdots & \ddots& \ddots \end{pmatrix}P. \end{eqnarray} Normally $C$ would be a constant; here $C$ is simply a column vector of constants. You can easily check that $e^{Ax}C$ is indeed given by the formula $$\sum_{i}a_ie^{\lambda_ix}v_i.$$ This gives you a heuristic proof of the statement.
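This identity is easy to test numerically (a sketch; note numpy's convention puts the eigenvectors in the *columns* of the matrix returned by `eig`, so it plays the role of $P^{-1}$ above, and the sample point $x$ is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
x = 0.7  # arbitrary sample point

# Diagonalize: columns of V are eigenvectors, so A = V D V^{-1}.
eigvals, V = np.linalg.eig(A)

# e^{Ax} = V e^{Dx} V^{-1}; the exponential of a diagonal matrix
# just exponentiates the diagonal entries.
eAx_diag = V @ np.diag(np.exp(eigvals * x)) @ np.linalg.inv(V)

# Compare with scipy's general-purpose matrix exponential.
print(np.allclose(eAx_diag, expm(A * x)))  # True
```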

Answer

For the matrix $$A = \begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix}$$

you have found $\lambda _1 =2$ and $\lambda _2 =1$.

The corresponding eigenvectors are $ v_1 =\begin{pmatrix} 1\\0 \end{pmatrix}$ and $ v_2 =\begin{pmatrix} -1\\1 \end{pmatrix} $

The general solution is therefore $$c_1 e^{\lambda _1 x}v_1 + c_2 e^{\lambda _2 x}v_2=$$

$$c_1 e^{2x} \begin{pmatrix} 1\\0 \end{pmatrix} + c_2 e^{x} \begin{pmatrix} -1\\1\end{pmatrix} . $$
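A one-line check that these really are eigenvectors for the stated eigenvalues (a numpy sketch):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
v1 = np.array([1.0, 0.0])   # claimed eigenvector for lambda_1 = 2
v2 = np.array([-1.0, 1.0])  # claimed eigenvector for lambda_2 = 1

print(np.allclose(A @ v1, 2 * v1))  # True
print(np.allclose(A @ v2, 1 * v2))  # True
```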

Answer

You don’t really need to find the eigenvectors explicitly in order to construct a solution to this differential equation. If a $2\times2$ real matrix $A$ has distinct real eigenvalues $\lambda_1$ and $\lambda_2$, then it can be decomposed into the form $\lambda_1P_1+\lambda_2P_2$, where $$P_1 = {A-\lambda_2I \over \lambda_1-\lambda_2} \\ P_2 = {A-\lambda_1I \over \lambda_2-\lambda_1}.$$ These are projections onto the two eigenspaces with the handy property that $P_1P_2=P_2P_1=0$. (In case you do need eigenvectors, the nonzero columns of $P_1$ and $P_2$ are all eigenvectors of the respective eigenvalues.) This annihilation property means (why?) that $e^{xA} = e^{\lambda_1x}P_1+e^{\lambda_2x}P_2$, and the general solution to the differential equation is thus $\left(e^{\lambda_1x}P_1+e^{\lambda_2x}P_2\right)\mathbf c$. Since the elements of the constant vector $\mathbf c$ are arbitrary, after you’ve expanded this expression you can combine them into a different pair of arbitrary constants to get an expression of the form $c_1e^{\lambda_1x}\mathbf v_1+c_2e^{\lambda_2x}\mathbf v_2$.
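The projection construction above can be checked directly for this matrix (a numpy/scipy sketch; the sample point $x$ is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
l1, l2 = 2.0, 1.0  # the two distinct eigenvalues

# Spectral projections onto the two eigenspaces.
I = np.eye(2)
P1 = (A - l2 * I) / (l1 - l2)
P2 = (A - l1 * I) / (l2 - l1)

# They annihilate each other and sum to the identity.
print(np.allclose(P1 @ P2, 0), np.allclose(P1 + P2, I))  # True True

# Consequently e^{xA} = e^{l1 x} P1 + e^{l2 x} P2.
x = 0.3
print(np.allclose(np.exp(l1*x)*P1 + np.exp(l2*x)*P2, expm(x*A)))  # True
```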