For a matrix $D$, is it the case that the diagonalization of $D$ is always given by $$ P^{-1} D P = \left( \begin{matrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{matrix} \right) $$ where $\lambda_1, \lambda_2$ are the eigenvalues of $D$?
In our lecture notes on solving systems of first-order PDEs by using diagonalization to put the system into canonical form, it says that we are required to diagonalize the matrix $D$ by explicitly finding $P$ and $P^{-1}$ and then calculating $P^{-1}DP$.
Is there any reason for using this method to diagonalize a matrix, rather than just plugging in the eigenvalues (as above)?
If $D$ is diagonalizable, then any diagonalization $P^{-1} D P$ of $D$ has the form $$\pmatrix{\lambda_1&&\\&\ddots&\\&&\lambda_n},$$ where $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $D$, listed in some order and repeated according to their algebraic multiplicities.
This follows from the easy-to-check facts that (1) similar matrices have the same characteristic polynomial, and hence the same eigenvalues with the same algebraic multiplicities, and (2) the eigenvalues of a diagonal matrix are precisely its diagonal entries, repeated according to multiplicity.
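To see this concretely, here is a small sketch (the $2 \times 2$ matrix, its eigenvalues, and its eigenvectors below are invented for illustration, worked out by hand): conjugating by a matrix $P$ of eigenvectors does produce the diagonal matrix of eigenvalues.

```python
# Sketch: verify that P^{-1} D P is diagonal with the eigenvalues of D on
# the diagonal. D and P below are a hand-worked example, not from the notes.
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = [[F(4), F(1)], [F(2), F(3)]]      # eigenvalues 5 and 2
P = [[F(1), F(1)], [F(1), F(-2)]]     # columns = eigenvectors (1,1), (1,-2)

# Inverse of a 2x2 matrix via the adjugate formula.
detP = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[ P[1][1] / detP, -P[0][1] / detP],
        [-P[1][0] / detP,  P[0][0] / detP]]

result = matmul(Pinv, matmul(D, P))
# The diagonal entries are the eigenvalues, in the order of the columns of P.
assert result == [[5, 0], [0, 2]]
```

Reordering the columns of $P$ simply permutes the diagonal entries, which is the "in some order" in the statement above.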
While it's true that knowing the eigenvalues is enough to write down the diagonal matrix itself, for applications one often wants to know the conjugation matrix $P$ explicitly.
For example, if one wants to solve a homogeneous, linear, constant-coefficient system $${\bf x}'(t) = A {\bf x}(t)$$ of $n$ o.d.e.s in $n$ unknown functions, the standard first step is to diagonalize $A$ (if possible), writing $A = P \Lambda P^{-1}$ for some diagonal matrix $\Lambda = \operatorname{diag}(\lambda_a)$. Then, in the new variable ${\bf y} := P^{-1} {\bf x}$, we can write a diagonalized system $${\bf y}' = \Lambda {\bf y},$$ which is easy to solve: $y_a = C_a \exp (\lambda_a t)$. On the other hand, our aim was to solve the original o.d.e. in ${\bf x}$, and to recover the solutions ${\bf x} = P {\bf y}$ thereto we need to know the conjugation matrix $P$ explicitly (and its inverse, to match initial conditions).
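A minimal sketch of this recipe (the coefficient matrix $A$, its eigendata, and the initial condition below are invented for illustration):

```python
# Sketch: solve x' = A x by diagonalization, assuming A = P Lambda P^{-1}
# is already known. A, its eigendata, and x0 are invented for illustration.
import math

A = [[4.0, 1.0], [2.0, 3.0]]
lams = [5.0, 2.0]                  # eigenvalues of A
P = [[1.0, 1.0], [1.0, -2.0]]      # columns = eigenvectors
Pinv = [[2/3, 1/3], [1/3, -1/3]]   # inverse of P, computed by hand

x0 = [1.0, 0.0]                    # initial condition x(0)

# In the variable y = P^{-1} x the system decouples: y_a' = lam_a * y_a,
# so y_a(t) = C_a * exp(lam_a * t) with C = P^{-1} x0.
C = [Pinv[i][0] * x0[0] + Pinv[i][1] * x0[1] for i in range(2)]

def x(t):
    """Recover x(t) = P y(t) from the decoupled solution y(t)."""
    y = [C[a] * math.exp(lams[a] * t) for a in range(2)]
    return [P[i][0] * y[0] + P[i][1] * y[1] for i in range(2)]

# Sanity checks: x(0) matches the initial condition, and a finite-difference
# derivative at t = 0 is close to A x(0).
assert all(abs(x(0)[i] - x0[i]) < 1e-9 for i in range(2))
h = 1e-6
dx = [(x(h)[i] - x(0)[i]) / h for i in range(2)]
Ax0 = [A[i][0] * x0[0] + A[i][1] * x0[1] for i in range(2)]
assert all(abs(dx[i] - Ax0[i]) < 1e-3 for i in range(2))
```

Note that both $P$ (to form ${\bf x} = P{\bf y}$) and $P^{-1}$ (to get the constants $C_a$ from the initial data) appear explicitly, which is exactly why knowing the eigenvalues alone is not enough here.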