Calculating eigenvectors when the eigenvalues are known

I have been searching around for a while to find a very simple example that would make it clear to me how to find the eigenvectors when the eigenvalues are already known. But unfortunately I don't get it.

My question looks like this:

Question

Notice that it has to be by hand with all the steps. I only need to know how to calculate the eigenvectors.

I'm used to doing it in Matlab, but this is an old exam question, so I guess I should learn how to do it by hand as well.

I have been looking at the solution:

Solution

But I could use a little bit more explanation than that.

Greetings Christian

There are 2 answers below.


By definition, if $\lambda$ is an eigenvalue of $A$, then the eigenvectors corresponding to $\lambda$ are the non-null vectors $v$ such that $Av=\lambda v$. But $$Av=\lambda v\iff(A-\lambda\operatorname{Id})v=0.$$ That's what is being done in the solution that you posted: for each eigenvalue $\lambda$, the author(s) compute non-null elements of $\ker(A-\lambda\operatorname{Id})$, that is, non-null vectors $v$ such that $(A-\lambda\operatorname{Id})v=0$.
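As a quick sketch of this (the $2 \times 2$ matrix below is a made-up example of mine, not the one from the question), you can compute $\ker(A-\lambda\operatorname{Id})$ symbolically:

```python
from sympy import Matrix

# Hypothetical example matrix; its characteristic polynomial is
# x^2 - 7x + 10 = (x - 2)(x - 5), so the eigenvalues are 2 and 5.
A = Matrix([[4, 1],
            [2, 3]])

for lam in [2, 5]:
    # The eigenvectors for lam are the non-null vectors in ker(A - lam*I),
    # i.e. a basis of the nullspace of (A - lam*I).
    null_basis = (A - lam * Matrix.eye(2)).nullspace()
    v = null_basis[0]
    print(f"lambda = {lam}: eigenvector = {list(v)}")
```

Doing the same by hand just means row-reducing $A-\lambda\operatorname{Id}$ and reading off the free variables.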


There is a neat little trick for finding the eigenvectors of a low-dimensional matrix. By the Cayley-Hamilton theorem, a matrix $A$ satisfies its own characteristic polynomial: $p(A) = 0$, where $p(x) = \det(A - xI)$.

Since the characteristic polynomial's roots are the eigenvalues, this gives $(A - \lambda_1 I)(A - \lambda_2 I)\cdots(A - \lambda_n I) = 0$, where the $\lambda_i$ are the eigenvalues of $A$, counted with multiplicity. Because $A$ commutes with itself and with $I$, the order of the factors doesn't matter, so we can take $\lambda_1$ to be whichever eigenvalue you are interested in.
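As a sanity check that the factored characteristic polynomial really does vanish on $A$, here is a small sketch; the matrix is an assumption of mine, chosen upper triangular so its eigenvalues $1, 2, 3$ sit on the diagonal:

```python
import numpy as np

# Made-up upper-triangular example; its eigenvalues 1, 2, 3 are the
# diagonal entries.
A = np.array([[1., 1., 0.],
              [0., 2., 1.],
              [0., 0., 3.]])
I = np.eye(3)

# Cayley-Hamilton: the product of (A - lambda_i I) over all eigenvalues
# is the zero matrix.
Z = (A - 1 * I) @ (A - 2 * I) @ (A - 3 * I)
print(np.allclose(Z, 0))  # True
```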

This means that $(A - \lambda_1 I)$ annihilates the product of the other factors, and hence every column of that product; therefore any non-zero column of that product is an eigenvector of $A$ associated with $\lambda_1$.

For example, if $A$ is a $3 \times 3$ matrix with eigenvalues $1, 2, 3$, then the non-zero columns in $(A - 2I)(A - 3I)$ are eigenvectors of $A$ associated with the eigenvalue $1$. The non-zero columns of $(A-I)(A - 3I)$ are eigenvectors associated with $2$, and the non-zero columns of $(A-I)(A-2I)$ are eigenvectors associated with $3$. You can find an example on the Eigenvalue algorithm page of Wikipedia.
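A sketch of that $3 \times 3$ example in code (the concrete matrix is an assumption of mine, chosen upper triangular so its eigenvalues $1, 2, 3$ are visible on the diagonal):

```python
import numpy as np

# Hypothetical 3x3 matrix with eigenvalues 1, 2, 3 (upper triangular,
# so the eigenvalues are the diagonal entries).
A = np.array([[1., 1., 0.],
              [0., 2., 1.],
              [0., 0., 3.]])
I = np.eye(3)

# For each eigenvalue lam, multiply out the factors for the *other*
# eigenvalues; any non-zero column of the product is an eigenvector
# of A associated with lam.
for lam, others in [(1, [2, 3]), (2, [1, 3]), (3, [1, 2])]:
    P = np.linalg.multi_dot([A - mu * I for mu in others])
    # Pick the first non-zero column of the product.
    col = next(P[:, j] for j in range(3) if np.any(P[:, j] != 0))
    v = col / np.linalg.norm(col)
    assert np.allclose(A @ v, lam * v)
    print(f"lambda = {lam}: eigenvector = {v}")
```

By hand, the same computation is just two small matrix multiplications per eigenvalue.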

For larger dimensions, taking all those products becomes more laborious than alternative methods, but for dimensions 2 and 3 it is an easy process (once you know the eigenvalues).