Looking for some clarification on a couple of things related to the eigendecomposition of a square matrix. Suppose we have a square n x n matrix A, and we are interested in finding its eigenvectors and eigenvalues.
I have seen it written that we want to find a matrix Q such that QAQ^-1 is a diagonal matrix. Having found Q, the ith row of Q would represent an eigenvector, and the diagonal entries of the product QAQ^-1 would be the corresponding eigenvalues.
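To make this first convention concrete, here's a minimal numpy sketch of what I understand it to say (the example matrix A and the use of np.linalg.eig are just my own illustration, not from the text I was reading):

```python
import numpy as np

# Made-up symmetric example matrix (my own choice, just for illustration)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, V = np.linalg.eig(A)  # columns of V are eigenvectors of A
Q = np.linalg.inv(V)           # pick Q so that Q A Q^-1 comes out diagonal

# Q A Q^-1 is diagonal, with the eigenvalues on the diagonal
assert np.allclose(Q @ A @ np.linalg.inv(Q), np.diag(eigvals))

# For this symmetric A the eigenvector matrix is orthogonal, so Q = V^T,
# and the rows of Q are the eigenvectors (transposed)
assert np.allclose(Q, V.T)
```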
What has me confused is that the Wikipedia page for eigendecomposition states the problem differently: a square matrix A can be factorized as A = QDQ^-1, where D is a diagonal matrix, the columns of Q are the eigenvectors, and the diagonal elements of D are their corresponding eigenvalues.
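For comparison, here's the same kind of numpy sketch for the Wikipedia statement (again, the example matrix is just something I made up):

```python
import numpy as np

# Made-up example matrix, just for illustration
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix Q whose COLUMNS
# are the corresponding eigenvectors, matching the Wikipedia convention
eigvals, Q = np.linalg.eig(A)
D = np.diag(eigvals)

# A = Q D Q^-1, with the eigenvalues on the diagonal of D
assert np.allclose(A, Q @ D @ np.linalg.inv(Q))

# Each column of Q satisfies A v = lambda v
for i in range(len(eigvals)):
    assert np.allclose(A @ Q[:, i], eigvals[i] * Q[:, i])
```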
Is there a better way to understand the matrix algebra involved here that makes these two statements consistent? In particular, it's the distinction between reading off the rows vs. the columns of Q to extract the eigenvectors in the two approaches that I'd like to understand better. Many thanks!