How to show that $\dot z=(M^{-1}AM)z\implies \dot z=\begin{bmatrix}a &-b\\b & a\end{bmatrix}z$?


Consider the linear system $$\dot x=A x,$$ with complex eigenvalues. Let $z=M^{-1}x$, where $M=[v_R\; v_I]$ with $v_R=1/2(v_1+v_2)$ and $v_I=i/2(v_1-v_2)$. Both $v_1,v_2$ represent the eigenvectors of $A$.

We want to show that the new state space representation of the system is $$\dot z=(M^{-1}AM)z,$$ or \begin{equation}\begin{bmatrix}\dot z_1\\\dot z_2\end{bmatrix}=\begin{bmatrix}a & -b\\b & a\end{bmatrix}\begin{bmatrix}z_1 \\ z_2\end{bmatrix},\end{equation}

where $a$ and $b$ are the real and imaginary parts of the eigenvalues of $A$.

I am not asking for the proof; I just don't understand the structure of $M$. At first I thought that $M$ was $1\times 2$, but then it could not be invertible. How can one carry out the matrix multiplication between $A$ and $M$?

I'd appreciate any hints.
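A quick numerical sketch may make the shape of $M$ concrete (the matrix $A$ below is a hypothetical example, not from the question; `numpy.linalg.eig` supplies the eigenpairs):

```python
import numpy as np

# Hypothetical 2x2 real A with complex eigenvalues (here 1 ± 2i)
A = np.array([[1.0, -2.0],
              [2.0,  1.0]])

mu, V = np.linalg.eig(A)            # complex eigenvalues and eigenvectors
v1, v2 = V[:, 0], V[:, 1]           # for real A these form a conjugate pair

v_R = (0.5  * (v1 + v2)).real       # (v1 + v2)/2   -- a real 2-vector
v_I = (0.5j * (v1 - v2)).real       # i/2 (v1 - v2) -- a real 2-vector

M = np.column_stack([v_R, v_I])     # each column is 2x1, so M is 2x2
print(M.shape)                      # (2, 2) -- square, not 1x2
print(abs(np.linalg.det(M)) > 0)    # True: M is invertible
```

So each of $v_R, v_I$ is a $2\times 1$ column, and stacking them side by side gives a square, invertible $2\times 2$ matrix that $A$ can multiply.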

There are 2 answers below.

BEST ANSWER

With

$A \in M(2, \Bbb R)$ and

$A v = \mu v, \tag 1$

where

$\mu \in \Bbb C \setminus \Bbb R, \tag 2$

the vector $v$ must also be complex; thus we have

$A\bar v = \bar \mu \bar v, \tag 3$

here

$\mu \ne \bar \mu \tag 4$

are the two complex eigenvalues of $A$, and $v$, $\bar v$ are thus the linearly independent eigenvectors. If we set

$\mu = \sigma + i \omega, \; \sigma, \omega \in \Bbb R, \tag 5$

and

$v = v_R + iv_I, \; v_R, v_I \in \Bbb R^2, \tag 6$

then (1) becomes

$A(v_R + iv_I) = (\sigma + i\omega)(v_R + iv_I); \tag 7$

we write this out in terms of real and imaginary parts:

$Av_R + iAv_I = \sigma v_R - \omega v_I + i(\sigma v_I + \omega v_R), \tag 8$

whence

$Av_R = \sigma v_R - \omega v_I, \tag 9$

$Av_I = \sigma v_I + \omega v_R; \tag{10}$

if we set

$M = [v_R \; v_I], \tag{11}$

that is, the columns of $M$ are $v_R$ and $v_I$, then

$AM = [Av_R \; Av_I] = [\sigma v_R - \omega v_I \; \sigma v_I + \omega v_R] = \sigma[v_R \; v_I] + \omega[-v_I \; v_R]; \tag{12}$

setting

$J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}; \tag{13}$

we see that, for any

$B = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \tag{14}$

we have

$BJ = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} = \begin{bmatrix} -b & a \\ -d & c \end{bmatrix}; \tag{15}$

thus

$[v_R \; v_I] J = [-v_I \; v_R], \tag{16}$

so that (12) becomes

$AM = [\sigma v_R - \omega v_I \; \sigma v_I + \omega v_R] = \sigma[v_R \; v_I] + \omega[v_R \; v_I]J = [v_R \; v_I](\sigma I + \omega J); \tag{17}$

finally we recall that by definition

$M^{-1}[v_R \; v_I] = I, \tag{18}$

whence

$M^{-1}AM = M^{-1}[v_R \; v_I] (\sigma I + \omega J) = I(\sigma I + \omega J) = \sigma I + \omega J = \begin{bmatrix} \sigma & \omega \\ -\omega & \sigma \end{bmatrix}. \tag{19}$
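As a sanity check of (11) and (19), here is a short numerical sketch (the matrix $A$ and its eigenvalues $\sigma \pm i\omega$ are hypothetical values chosen for illustration):

```python
import numpy as np

# Hypothetical example: A has eigenvalues sigma ± i*omega (here 0.5 ± 3i)
A = np.array([[0.5, -3.0],
              [3.0,  0.5]])

mu, V = np.linalg.eig(A)
v = V[:, 0]                           # one complex eigenvector, v = v_R + i v_I
sigma, omega = mu[0].real, mu[0].imag

M = np.column_stack([v.real, v.imag])  # M = [v_R  v_I], per (11)
B = np.linalg.inv(M) @ A @ M           # should equal sigma*I + omega*J, per (19)

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.allclose(B, sigma * np.eye(2) + omega * J))   # True
```

Note that rescaling $v$ by a complex phase changes $M$ but not the resulting matrix, since rotations commute with $J$.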

(A note on signs: the question defines $v_I = \frac{i}{2}(v_1 - v_2) = -\operatorname{Im} v$, the opposite of the convention $v = v_R + iv_I$ used here; replacing $v_I$ by $-v_I$ flips the sign of $\omega$ in (19) and produces the question's form $\begin{bmatrix} \sigma & -\omega \\ \omega & \sigma \end{bmatrix} = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}$.)

Now with (19) in hand we can see that the original differential equation

$\dot x = Ax \tag{20}$

becomes, under the transformation

$z = M^{-1}x \tag{21}$

$\dot z = M^{-1} \dot x = M^{-1}Ax = M^{-1}AM M^{-1}x = M^{-1}AM z = (\sigma I + \omega J)z, \tag{22}$

the solution of which is readily seen to be

$z(t) = \exp((\sigma I + \omega J)(t - t_0))z(t_0) = \exp(\sigma I(t - t_0)) \exp(\omega J(t - t_0))z(t_0), \tag{23}$

where the exponential factors into a product because $\sigma I$ and $\omega J$ commute.

We have

$\exp(\sigma I(t - t_0)) = e^{\sigma(t - t_0)}I, \tag{24}$

and

$\exp(\omega J(t - t_0)) = (\cos \omega(t - t_0)) I + (\sin \omega(t - t_0)) J, \tag{25}$

that is, $\exp(\omega J(t - t_0))$ satisfies a matrix analogue of the Euler identity familiar from complex analysis:

$e^{i\omega(t - t_0)} = \cos \omega(t - t_0) + i \sin \omega(t - t_0), \tag{26}$

which is readily derived from the power series for $e^{i\omega(t - t_0)}$, $\cos \omega(t - t_0)$ and $\sin \omega (t - t_0)$; in the case of (25) we use

$J^2 = -I, \; J^3 = -J, \; J^4 = I \tag{27}$

in place of

$i^2 = -1, \; i^3 = -i, \; i^4 = 1, \tag{28}$

but other than that, the algebra involved in establishing (25) is more or less the same as that necessary for (26); thus we may exploit (25) and write

$\exp(\omega J(t - t_0)) = (\cos \omega(t - t_0)) I + (\sin \omega(t - t_0)) J = \begin{bmatrix} \cos \omega(t - t_0) & \sin \omega(t - t_0) \\ -\sin \omega(t - t_0) & \cos \omega(t - t_0) \end{bmatrix}, \tag{29}$

and so from (23), (24) and (29) it follows that

$z(t) = e^{\sigma(t - t_0)}\begin{bmatrix} \cos \omega(t - t_0) & \sin \omega(t - t_0) \\ -\sin \omega(t - t_0) & \cos \omega(t - t_0) \end{bmatrix} z(t_0). \tag{30}$
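The closed form for $z(t)$ above can be checked against the matrix exponential computed directly from its power series (hypothetical values of $\sigma$, $\omega$, $t$, with $t_0 = 0$):

```python
import numpy as np

def expm_series(X, terms=60):
    # Matrix exponential via its power series; adequate for small matrices
    E, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ X / k
        E = E + term
    return E

sigma, omega, t = 0.5, 3.0, 0.7         # hypothetical values, t0 = 0
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
X = (sigma * np.eye(2) + omega * J) * t

# Closed form: e^{sigma t} times the rotation-like matrix
c, s = np.cos(omega * t), np.sin(omega * t)
closed = np.exp(sigma * t) * np.array([[c, s], [-s, c]])

print(np.allclose(expm_series(X), closed))   # True
```

The agreement reflects exactly the $J^2 = -I$ bookkeeping used in the derivation: even powers of $\omega J$ build the cosine, odd powers the sine.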

ANSWER

Hint: $M$ is square ($2\times 2$), as it must be for it to be invertible; its two columns $v_R$ and $v_I$ are $2\times 1$ vectors.

The eigenvalues will be conjugates of each other: $\lambda_1=a+bi$ and $\lambda_2=a-bi$.

Using the equations $Av_i=\lambda_i v_i$, we get

$Av_R=av_R+bv_I$ and $Av_I=-bv_R+av_I$.

But, recall that $M^{-1}AM$ is $A$ expressed in terms of the basis $\{v_R,v_I\}$.

The result follows.
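The hint's two relations can be checked numerically (hypothetical matrix $A$, using the question's convention $v_R = \tfrac12(v_1+v_2)$, $v_I = \tfrac i2(v_1-v_2)$):

```python
import numpy as np

# Hypothetical A with eigenvalues a ± bi (here a=1, b=2)
A = np.array([[1.0, -2.0],
              [2.0,  1.0]])

lam, V = np.linalg.eig(A)
i1 = int(np.argmax(lam.imag))     # take lambda_1 = a + bi (positive imag part)
v1 = V[:, i1]
v2 = np.conj(v1)                  # eigenvector for lambda_2 = a - bi
a, b = lam[i1].real, lam[i1].imag

v_R = (0.5  * (v1 + v2)).real     # (v1 + v2)/2
v_I = (0.5j * (v1 - v2)).real     # i/2 (v1 - v2)

print(np.allclose(A @ v_R,  a * v_R + b * v_I))   # True
print(np.allclose(A @ v_I, -b * v_R + a * v_I))   # True
```

Reading off the coordinates of $Av_R$ and $Av_I$ in the basis $\{v_R, v_I\}$ gives the columns $(a, b)^T$ and $(-b, a)^T$, i.e. exactly the matrix $\begin{bmatrix} a & -b \\ b & a \end{bmatrix}$ asked for.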