What is the mistake in my matrix power formula?


I am working through a Linear Algebra practice test and got stuck on the following question:

Let $A = \begin{pmatrix}5 & -1\\-1 & 5\end{pmatrix}$. Compute a formula for $A^k$, where $k$ is a positive integer. Your answer should be a single matrix.

I calculated $A^2$ and got $\begin{pmatrix}26 & -10\\-10 & 26\end{pmatrix}$, but figured repeated multiplication was not the way to go, so I calculated $A$'s eigenvalues and eigenvectors to diagonalize it, yielding $\begin{pmatrix}4 & 0\\0 & 6\end{pmatrix}$ and the change-of-basis matrix $\begin{pmatrix}1 & -1\\1 & 1\end{pmatrix}$. Then I inverted the change-of-basis matrix to get $\begin{pmatrix}1/2 & 1/2\\-1/2 & 1/2\end{pmatrix}$.

I multiplied $\begin{pmatrix}4^k & 0\\0 & 6^k\end{pmatrix}$ on the left by this inverse to get the putative solution $\begin{pmatrix}4^k/2 & 6^k/2\\-4^k/2 & 6^k/2\end{pmatrix}$, but this does not reproduce the result I initially got by computing $A^2$. Where does my mistake lie?
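As a quick numerical sanity check (a sketch using NumPy, not part of the original question), one can compare the order of multiplication used above against direct powers of $A$:

```python
import numpy as np

# Matrix from the question and its eigendecomposition
A = np.array([[5, -1], [-1, 5]])
P = np.array([[1, -1], [1, 1]])      # columns are the eigenvectors
P_inv = np.linalg.inv(P)             # = [[1/2, 1/2], [-1/2, 1/2]]
D = np.diag([4, 6])

k = 2
attempt = P_inv @ np.linalg.matrix_power(D, k)  # the order used in the question
correct = np.linalg.matrix_power(A, k)          # direct computation of A^2

print(attempt)  # does not equal A^2
print(correct)  # [[26, -10], [-10, 26]]
```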



BEST ANSWER

As qbert wrote in his comment, you’ve gotten the change-of-basis matrix and its inverse in the wrong order. I often made the same mistake until I memorized the “bulk” matrix version of the eigenvector equation, $AP=P\Lambda$. In this case, you’re going from the diagonal matrix $\Lambda^k$ to $A^k$, so the $P$ on the left-hand side of the equation has to move to the right: $A^k=P\Lambda^kP^{-1}$.
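A quick check of the corrected order $A^k = P\Lambda^k P^{-1}$ (a NumPy sketch, with $\Lambda$ written as `D`):

```python
import numpy as np

A = np.array([[5, -1], [-1, 5]])
P = np.array([[1, -1], [1, 1]])  # eigenvectors as columns
D = np.diag([4, 6])              # Lambda: eigenvalues on the diagonal

# P @ D^k @ P^{-1} should reproduce A^k for every positive integer k
for k in range(1, 6):
    Ak = P @ np.linalg.matrix_power(D, k) @ np.linalg.inv(P)
    assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```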

ANSWER

$$ \frac{1}{2} \left( \begin{array}{rr} 1 & -1 \\ 1 & 1 \end{array} \right) \left( \begin{array}{rr} 1 & 1 \\ -1 & 1 \end{array} \right) = I $$ $$ \frac{1}{2} \left( \begin{array}{rr} 1 & -1 \\ 1 & 1 \end{array} \right) \left( \begin{array}{rr} 4 & 0 \\ 0 & 6 \end{array} \right) \left( \begin{array}{rr} 1 & 1 \\ -1 & 1 \end{array} \right) = \left( \begin{array}{rr} 5 & -1 \\ -1 & 5 \end{array} \right) $$ $$ \frac{1}{2} \left( \begin{array}{rr} 1 & -1 \\ 1 & 1 \end{array} \right) \left( \begin{array}{rr} 4 & 0 \\ 0 & 6 \end{array} \right)^k \left( \begin{array}{rr} 1 & 1 \\ -1 & 1 \end{array} \right) = \left( \begin{array}{rr} 5 & -1 \\ -1 & 5 \end{array} \right)^k $$
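Multiplying out the left-hand side of the last identity gives the single matrix the exercise asks for, $A^k = \frac{1}{2}\begin{pmatrix}4^k+6^k & 4^k-6^k\\4^k-6^k & 4^k+6^k\end{pmatrix}$; a sketch verifying this closed form numerically (NumPy assumed):

```python
import numpy as np

A = np.array([[5, -1], [-1, 5]])

# Closed form obtained by carrying out the product P D^k P^{-1}:
# A^k = (1/2) [[4^k + 6^k, 4^k - 6^k], [4^k - 6^k, 4^k + 6^k]]
def A_power(k):
    s, d = 4**k + 6**k, 4**k - 6**k
    return np.array([[s, d], [d, s]]) / 2

# Compare against repeated multiplication for several exponents
for k in range(1, 8):
    assert np.allclose(A_power(k), np.linalg.matrix_power(A, k))
```

For $k=2$ the formula gives $\frac{1}{2}\begin{pmatrix}52 & -20\\-20 & 52\end{pmatrix} = \begin{pmatrix}26 & -10\\-10 & 26\end{pmatrix}$, matching the $A^2$ computed in the question.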

ANSWER

To understand what the linear transformations are actually doing, it's better to include the bases on which the linear transformations act in our calculations.

In the question, $A = \begin{pmatrix}5 & -1 \\ -1 & 5\end{pmatrix}$. It is diagonalised to give $D = [\mathsf{L}_A]_\beta = \begin{pmatrix}4 & 0\\0 & 6\end{pmatrix}$, where $\beta$ is the orthogonal basis formed by the eigenvectors $\beta = \left\{\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} -1 \\ 1 \end{pmatrix} \right\}$. This gives $P = [\mathsf{Id}]_\beta^\gamma = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}$, where $\gamma$ represents the standard basis.

$P^{-1}$ is actually the change of basis matrix from $\gamma$ to $\beta$.

$$P^{-1} = [\mathsf{Id}]_\gamma^\beta = \begin{pmatrix}1/2 & 1/2 \\ -1/2 & 1/2\end{pmatrix}$$

From this, we see that $D^k = [\mathsf{L}_{A^k}]_\beta$.

The last equation in the question body is obtained by multiplying $D^k$ on the left by $P^{-1}$. When we include the bases in our writing, it becomes $$P^{-1} D^k = [\mathsf{Id}]_\gamma^\beta [\mathsf{L}_{A^k}]_\beta,$$ which doesn't make sense: $[\mathsf{L}_{A^k}]_\beta$ outputs coordinates with respect to $\beta$, but $[\mathsf{Id}]_\gamma^\beta$ expects input coordinates with respect to $\gamma$, so the inner bases don't match. To find $A^k$, we can write $$A^k = [\mathsf{L}_{A^k}]_\gamma = [\mathsf{Id}]_\beta^\gamma [\mathsf{L}_{A^k}]_\beta [\mathsf{Id}]_\gamma^\beta = P D^k P^{-1}.$$ It's easier to compute $\mathsf{L}_A$ with respect to $\beta$ (where it is diagonal) than with respect to $\gamma$ (where it is not), so we first apply the change of basis matrix from $\gamma$ to $\beta$, carry out $\mathsf{L}_{A^k}$ easily, then convert the output back to a vector with coordinates with respect to $\gamma$.
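The three-step composition above can be traced on a concrete vector (a NumPy sketch; the vector $v$ and exponent $k$ below are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[5, -1], [-1, 5]])
P = np.array([[1, -1], [1, 1]])   # [Id]_beta^gamma: beta-coords -> standard coords
P_inv = np.linalg.inv(P)          # [Id]_gamma^beta: standard coords -> beta-coords
D = np.diag([4, 6])

v = np.array([2.0, 3.0])          # a vector in standard (gamma) coordinates
k = 3

v_beta = P_inv @ v                                # 1. express v in the eigenbasis beta
w_beta = np.linalg.matrix_power(D, k) @ v_beta    # 2. apply L_{A^k}, easy since D is diagonal
w = P @ w_beta                                    # 3. convert the output back to gamma

# Same result as applying A^k directly in standard coordinates
assert np.allclose(w, np.linalg.matrix_power(A, k) @ v)
```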