How to get the two eigenvectors for eigenvalue $=1$


I have to find the eigenvectors of this matrix: \begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{pmatrix}

I end up with this matrix, into which I plug the eigenvalues: \begin{pmatrix} 1-\lambda & 0 & 1\\ 0 & 1-\lambda & 0\\ 0 & 0 & -\lambda \end{pmatrix}

The eigenvalues are $1$ (a double root of the characteristic polynomial) and $0$.

When I plug in zero, I get the eigenvector \begin{pmatrix} -1\\ 0\\ 1 \end{pmatrix}

But I'm not sure how the answer key gets two vectors when using the eigenvalue $1$. The vectors should be: \begin{pmatrix} 1\\ 0\\ 0 \end{pmatrix}

\begin{pmatrix} 0\\ 1\\ 0 \end{pmatrix}
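As a quick numerical sanity check (not part of the original question, and assuming NumPy is available), you can confirm that $1$ really does have a two-dimensional eigenspace:

```python
import numpy as np

# The matrix from the question.
A = np.array([[1., 0., 1.],
              [0., 1., 0.],
              [0., 0., 0.]])

# The characteristic polynomial is (1 - t)^2(-t), so the eigenvalues
# are 1 (algebraic multiplicity 2) and 0.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.sort(eigenvalues.real))  # approximately [0, 1, 1]

# A - I has rank 1, so its nullspace (the eigenspace for 1)
# is 2-dimensional, which is why the answer key lists two vectors.
print(np.linalg.matrix_rank(A - np.eye(3)))  # 1
```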


Best answer:

For each eigenvalue $\lambda$, use the definition of "eigenvector" and just solve

$\begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\c \end{pmatrix}=\lambda \begin{pmatrix} a \\ b \\c \end{pmatrix}$.

This will furnish you with eigenvectors for each eigenvalue. It turns out in this case that the eigenspace for the value $1$ is two-dimensional, so you can produce two linearly independent eigenvectors for $1$.

From $\begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\c \end{pmatrix}=\begin{pmatrix} a \\ b \\c \end{pmatrix}$

we learn that $\begin{pmatrix} a +c \\ b \\0 \end{pmatrix}=\begin{pmatrix} a \\ b \\c \end{pmatrix}$, so in other words $c=0$, and an eigenvector for $1$ must look like this:

$\begin{pmatrix} a \\ b \\0 \end{pmatrix}$. Now you just have to find two different sets of coefficients satisfying this such that the vectors are linearly independent: pretty easy to do in this case. You already have an answer in front of you, so see how it fits in.
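To see this concretely (a NumPy sketch I'm adding for illustration; the sample coefficients are arbitrary), every nonzero vector of the form $(a, b, 0)$ is fixed by the matrix:

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 0.],
              [0., 0., 0.]])

# Any vector (a, b, 0) satisfies Av = v, so it is an eigenvector
# for eigenvalue 1 whenever it is nonzero.
for a, b in [(1., 0.), (0., 1.), (3., -2.)]:
    v = np.array([a, b, 0.])
    assert np.allclose(A @ v, v)

# The answer key's choices (1,0,0) and (0,1,0) are linearly
# independent, so they form a basis of this 2-dimensional eigenspace.
basis = np.column_stack(([1., 0., 0.], [0., 1., 0.]))
print(np.linalg.matrix_rank(basis))  # 2
```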


This is equivalent to substituting the eigenvalue into the matrix $A-\lambda I$ and then computing generators of the nullspace of that matrix.

In this case, for $\lambda=1$, you would be looking for solutions to this equation:

$\begin{pmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & -1 \end{pmatrix}\begin{pmatrix} a \\ b \\c \end{pmatrix}= \begin{pmatrix} 0 \\ 0 \\0 \end{pmatrix}$.

From this you learn that $ \begin{pmatrix} c \\ 0 \\-c \end{pmatrix}= \begin{pmatrix} 0 \\ 0 \\0 \end{pmatrix}$, which just says $c=0$.

That means you are free to choose $a,b$ in the vector $ \begin{pmatrix} a \\ b \\0 \end{pmatrix}$ as you wish, just as in my previous paragraphs.
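The nullspace computation can be done mechanically; here is one way with NumPy's SVD (a sketch I'm adding, not part of the original answer): the right-singular vectors belonging to zero singular values span the nullspace.

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 0.],
              [0., 0., 0.]])
M = A - 1.0 * np.eye(3)  # substitute the eigenvalue lambda = 1

# Rows of Vt whose singular value is (near) zero form an
# orthonormal basis of the nullspace of M.
_, s, Vt = np.linalg.svd(M)
null_basis = Vt[np.isclose(s, 0.0)]

print(null_basis.shape[0])  # 2: the eigenspace for 1 is 2-dimensional
# Every basis vector has third coordinate 0, i.e. it has the form (a, b, 0).
print(np.allclose(null_basis[:, 2], 0.0))  # True
```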

Another answer:

If $\lambda$ is an eigenvalue for a matrix $A$, then an eigenvector for $\lambda$ is a vector $v$ such that $Av = \lambda v$, or $(A-\lambda I)v = 0$. So the eigenvectors corresponding to $\lambda$ are exactly those vectors whose image under $A-\lambda I$ is the zero vector --- that is, they are just the nullspace of $A-\lambda I$. Presumably you already know how to calculate the nullspace of a matrix.
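For instance (a NumPy illustration I'm adding), for $\lambda=0$ the nullspace of $A-0\cdot I = A$ is one-dimensional, and its generator is proportional to the vector $(-1,0,1)$ found in the question:

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 0.],
              [0., 0., 0.]])

# For lambda = 0, the eigenvectors are the nullspace of A itself.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[np.isclose(s, 0.0)]

v = null_basis[0]
assert np.allclose(A @ v, 0.0)          # v is killed by A
# v is proportional to (-1, 0, 1): middle entry 0, ends opposite.
assert np.isclose(v[1], 0.0) and np.isclose(v[0], -v[2])
```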

Another answer:

If we have a matrix $A=(a_{ij})_{1\le i,j\le n}$ in the standard basis of $\Bbb R^n$, then

$$A e_j=\sum_{i=1}^n a_{ij}e_i$$ In our case we can read immediately from the given matrix that $$Ae_1=e_1\quad;\quad Ae_2=e_2\quad\text{and}\quad Ae_3=e_1$$ so we see that $e_1$ and $e_2$ are eigenvectors associated to the eigenvalue $1$. Of course the last eigenvalue is $0$ and we see easily that $e_1-e_3$ is an eigenvector associated to it and $A$ is similar to the diagonal matrix $\operatorname{diag}(1,1,0)$.
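The similarity claim is easy to verify numerically (a NumPy check I'm adding): take $P$ with columns $e_1$, $e_2$, $e_1-e_3$ and compute $P^{-1}AP$.

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 0.],
              [0., 0., 0.]])

# Columns: e1 and e2 (eigenvalue 1), then e1 - e3 (eigenvalue 0).
P = np.array([[1., 0., 1.],
              [0., 1., 0.],
              [0., 0., -1.]])

# Conjugating by the eigenvector basis diagonalizes A.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([1., 1., 0.]))  # A ~ diag(1, 1, 0)
```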