Is the diagonalization of a matrix $D$ always given by $\boldsymbol{\lambda} I$ (where $I$ is the identity matrix)?


For a matrix $D$, is it the case that the diagonalization of $D$ is always given by $$ P^{-1} D P = \left( \begin{matrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{matrix} \right) $$ where $\lambda_1, \lambda_2$ are the eigenvalues of $D$?

In our lecture notes on solving systems of first order PDEs by using diagonalization to put the system into canonical form, it says that we are required to diagonalize the matrix $D$ by explicitly finding $P$ and $P^{-1}$ and then calculating $P^{-1}DP$.

Is there any reason for using this method to diagonalize a matrix, rather than just plugging in the eigenvalues (as above)?


Best answer:

For a matrix $D$, is it the case that the diagonalization of $D$ is always given by $$ P^{-1} D P = \left( \begin{matrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{matrix} \right) $$ where $\lambda_1, \lambda_2$ are the eigenvalues of $D$?

If $D$ is diagonalizable, then any diagonalization $P^{-1} D P$ of $D$ has the form $$\pmatrix{\lambda_1&&\\&\ddots&\\&&\lambda_n},$$ where $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $D$ in some order (and occur with the multiplicities of the eigenvalues).

This follows from the easy-to-check facts that

  1. the eigenvalues of a matrix are preserved by conjugation, and
  2. the eigenvalues of a diagonal matrix are precisely its diagonal entries (and the multiplicities of the eigenvalues are given by the multiplicities of the diagonal entries).
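Both facts are easy to check numerically. Here is a minimal sketch with numpy, using a hypothetical $2 \times 2$ matrix $D$ and an invertible $P$ chosen just for illustration:

```python
import numpy as np

# Hypothetical 2x2 example: D is triangular, so its eigenvalues (2 and 3)
# can be read off the diagonal; P has det = 1, so it is invertible.
D = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 2.0],
              [1.0, 3.0]])

# Fact 1: conjugation preserves eigenvalues.
eig_D = np.sort(np.linalg.eigvals(D).real)
eig_conj = np.sort(np.linalg.eigvals(np.linalg.inv(P) @ D @ P).real)
print(np.allclose(eig_D, eig_conj))          # True

# Fact 2: the eigenvalues of a diagonal matrix are its diagonal entries.
Lam = np.diag([2.0, 3.0])
print(np.sort(np.linalg.eigvals(Lam).real))  # [2. 3.]
```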

In our lecture notes on solving systems of first order PDEs by using diagonalization to put the system into canonical form, it says that we are required to diagonalize the matrix $D$ by explicitly finding $P$ and $P^{-1}$ and then calculating $P^{-1}DP$.

Is there any reason for using this method to diagonalize a matrix, rather than just plugging in the eigenvalues (as above)?

While it's true that to know the diagonalization of a matrix it's enough to know the eigenvalues, for applications one often wants to know the conjugation matrix $P$ explicitly.

For example, if one wants to solve a homogeneous, linear, constant-coefficient system $${\bf x}'(t) = A {\bf x}(t)$$ of $n$ o.d.e.s in $n$ unknown functions, the standard first step is to diagonalize $A$ (if possible), writing $A = P \Lambda P^{-1}$ for some diagonal matrix $\Lambda = \operatorname{diag}(\lambda_a)$. Then, in the new variable ${\bf y} := P^{-1} {\bf x}$ we can write a diagonalized system $${\bf y}' = \Lambda {\bf y},$$ which is easy to solve: $y_a = C_a \exp (\lambda_a t)$. On the other hand, our aim was to solve the original o.d.e. in ${\bf x}$, and to recover the solutions ${\bf x} = P {\bf y}$ thereto we need to know the conjugation matrix $P$ (and its inverse, to impose initial conditions on ${\bf y}$).
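The procedure above can be sketched numerically. This is a minimal illustration, not a general-purpose solver; the matrix $A$ and initial condition ${\bf x}_0$ are hypothetical numbers chosen so the eigenvalues are real and distinct:

```python
import numpy as np

# Hypothetical 2x2 system x'(t) = A x(t) with initial condition x0.
A = np.array([[0.0, 1.0],
              [2.0, 1.0]])      # eigenvalues 2 and -1
x0 = np.array([1.0, 0.0])

# np.linalg.eig returns A = P @ diag(lam) @ inv(P); columns of P are eigenvectors.
lam, P = np.linalg.eig(A)

# Change variables: y = P^{-1} x decouples the system into y_a' = lam_a * y_a.
y0 = np.linalg.solve(P, x0)

def x(t):
    """Solution x(t) = P @ (y0 * exp(lam * t)) of the original system."""
    return (P @ (y0 * np.exp(lam * t))).real

# Sanity checks: x(0) = x0, and x'(t) = A x(t) (central finite difference).
print(np.allclose(x(0.0), x0))                   # True
h = 1e-6
dx = (x(0.5 + h) - x(0.5 - h)) / (2 * h)
print(np.allclose(dx, A @ x(0.5), atol=1e-4))    # True
```

Note that both $P$ (to map ${\bf y}$ back to ${\bf x}$) and a solve with $P$ (to turn ${\bf x}_0$ into ${\bf y}_0$) are needed, which is exactly why knowing the eigenvalues alone does not suffice.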

Another answer:

The transformation $P$ also tells you the change from the original variables $(x,y)$ to "new" variables $(X,Y)$ that will decouple your DEs. Basically it tells you which linear combinations $X$ and $Y$ of the old variables $x$ and $y$ to choose to obtain a simple DE in $X(x,y)$ and $Y(x,y)$.

The strategy is then to solve for $X$ and $Y$, and then revert back to the original variables $x$ and $y$ using the inverse transformation $P^{-1}$.

In other words, if you want to express the solution of your coupled system in terms of the original variables, you need $P$ and its inverse. If you are only interested in the stability properties of your DEs, then knowledge of the eigenvalues is enough.

... and yes, $P^{-1}DP$ is always diagonal (provided $D$ is diagonalizable). That's the key, since then there are no cross-terms in the DEs for $X$ and $Y$: the DE for $X$ is a function of $X$ only, and the DE for $Y$ is a function of $Y$ only.
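A quick numerical check of this last point, using a hypothetical symmetric (hence certainly diagonalizable) $2 \times 2$ matrix $D$ and the eigenvector matrix $P$ from numpy:

```python
import numpy as np

# Hypothetical symmetric 2x2 matrix: guaranteed diagonalizable.
D = np.array([[1.0, 3.0],
              [3.0, 1.0]])     # eigenvalues 4 and -2
lam, P = np.linalg.eig(D)      # columns of P are eigenvectors of D

# With eigenvectors as the columns of P, P^{-1} D P is diagonal,
# with the eigenvalues on the diagonal: no cross-terms remain.
M = np.linalg.inv(P) @ D @ P
print(np.allclose(M, np.diag(lam)))   # True
```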