I found the following derivation in my textbook confusing:
For a $2\times2$ matrix $V=\left[ {\begin{array}{cc} a & b \\ c & d \\ \end{array} } \right]$, suppose we know its eigenvalues $\lambda_1$ and $\lambda_2$.
Let $P$ be the $2\times2$ matrix whose columns are the corresponding eigenvectors, i.e. $P=(x_1,\ x_2)$. Then $V=P\left( {\begin{array}{cc} \lambda_1 & 0 \\ 0 & \lambda_2 \\ \end{array} } \right)P^{-1}$ --------(1)
The trace of $V^N$ is:
$\operatorname{Trace}(V^N)=\lambda_1^N+\lambda_2^N$. --------(2)
I understand that when we substitute (1) into (2), the inner $P^{-1}P$ pairs in the product cancel. But what about the first $P$ and the last $P^{-1}$? How do they cancel out?
They aren't. But $$V^N=P\begin{pmatrix}{\lambda_1}^N&0\\0&{\lambda_2}^N\end{pmatrix}P^{-1}.$$ Therefore $V^N$ and $\left(\begin{smallmatrix}{\lambda_1}^N&0\\0&{\lambda_2}^N\end{smallmatrix}\right)$ are similar, and similar matrices have the same trace: writing $D=\left(\begin{smallmatrix}{\lambda_1}^N&0\\0&{\lambda_2}^N\end{smallmatrix}\right)$ and using $\operatorname{Trace}(AB)=\operatorname{Trace}(BA)$, we get $\operatorname{Trace}(PDP^{-1})=\operatorname{Trace}(DP^{-1}P)=\operatorname{Trace}(D)=\lambda_1^N+\lambda_2^N$.
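A quick numerical sanity check of this identity (a sketch using NumPy; the matrix $V$ below is an arbitrary diagonalizable example, not from the question):

```python
import numpy as np

# An arbitrary symmetric (hence diagonalizable) example matrix.
V = np.array([[2.0, 1.0],
              [1.0, 3.0]])
N = 5

# Left-hand side: trace of V^N computed directly.
trace_power = np.trace(np.linalg.matrix_power(V, N))

# Right-hand side: sum of the N-th powers of the eigenvalues.
eigvals = np.linalg.eigvals(V)
sum_eig_powers = np.sum(eigvals**N).real

# Both agree (here both are 625 up to floating-point rounding).
assert np.isclose(trace_power, sum_eig_powers)
```

Since the trace equality holds for any similar pair, any diagonalizable matrix plugged in for `V` should pass the same check.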