Eigenvectors of a matrix: theory questions


We have a matrix:

$$A = \begin{bmatrix} 2 & -2 & 1\\ 1 & -1 & 1\\ 0 & 0 & 1 \end{bmatrix}$$

The eigenvalues of this matrix are:

$$\lambda_{1,2}=1$$

$$\lambda_3=0$$
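As a quick numerical sanity check of these eigenvalues (a NumPy sketch added for illustration, not part of the original computation):

```python
import numpy as np

# The matrix from the question
A = np.array([[2, -2, 1],
              [1, -1, 1],
              [0,  0, 1]], dtype=float)

# Eigenvalues via NumPy; the order is not guaranteed, so sort them
eigenvalues = np.sort(np.linalg.eigvals(A).real)
print(eigenvalues)  # expect approximately [0, 1, 1]
```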

I have a few questions regarding understanding why we do some things:

  1. To find the eigenvalues we wrote $$\det(A-\lambda I)=\dots$$ and calculated from there. In many books I've seen the notation $$\det(\lambda I-A)=\dots$$ instead. What is the difference between these two notations when calculating eigenvalues?

  2. I've seen that the zero vector \begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix} is never accepted as an eigenvector. Why is that so?
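Regarding question 1, the two determinants differ only by a sign for odd $n$, since $\det(\lambda I - A) = (-1)^n \det(A - \lambda I)$, so they have exactly the same roots. A minimal NumPy sketch illustrating this for the $3\times 3$ matrix above:

```python
import numpy as np

A = np.array([[2, -2, 1],
              [1, -1, 1],
              [0,  0, 1]], dtype=float)
I = np.eye(3)

# For an n x n matrix, det(lam*I - A) = (-1)^n * det(A - lam*I).
# Here n = 3, so the two characteristic polynomials differ only by
# an overall sign flip and share exactly the same roots.
vals = []
for lam in [0.0, 0.5, 1.0, 2.0]:
    d1 = np.linalg.det(A - lam * I)
    d2 = np.linalg.det(lam * I - A)
    vals.append((d1, d2))
    print(lam, d1, d2)  # d1 and -d2 agree up to floating-point error
```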

Another question: why do verified eigenvector solutions always seem to contain at least one zero entry? (I mean, I've never seen an eigenvector written as $\begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix}$), even when such a vector is a solution of $\ker(A-\lambda I)$, with which we calculate eigenvectors.

If I understand the kernel correctly, it represents all the vectors of one space that a linear transformation sends to the zero vector of another space. Why is the assumption that $\begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix}$ is also one of those vectors wrong, if it satisfies the system we solve when computing $\ker(A-\lambda I)$?
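For this particular matrix the guess can be checked directly: the vector $(1,1,1)^T$ does lie in $\ker(A-I)$, so it is a perfectly valid eigenvector for $\lambda=1$. A minimal NumPy check (added for illustration):

```python
import numpy as np

A = np.array([[2, -2, 1],
              [1, -1, 1],
              [0,  0, 1]], dtype=float)
v = np.array([1.0, 1.0, 1.0])

# (A - I) v is the zero vector, so v lies in ker(A - I)
print((A - np.eye(3)) @ v)  # [0. 0. 0.]

# Equivalently, A v = 1 * v, so v is an eigenvector for lambda = 1
print(A @ v)                # [1. 1. 1.]
```

Any nonzero scalar multiple of $v$ works equally well; textbooks simply tend to pick a convenient basis for the kernel, which often happens to contain zeros.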