Why do we use the zero vector for computing eigenvectors?


While computing eigenvectors, we set a determinant equal to zero. I know what that condition is for. However, I can't understand why we need the zero vector that appears in the definition of eigenvectors. Is it because eigenvectors always stay on the same line, as the zero vector in this case does?

Accepted answer:

As I understand it, you want to know the origins of the equation $$ (A-\lambda I)v = 0 $$ where $A$ is a matrix, $v$ is an eigenvector of $A$, and $\lambda$ is an eigenvalue.

The fact of the matter is that this is quite an opaque expression: it bears little resemblance to the original intent, because it has been heavily manipulated algebraically.

The actual equation you want to solve (i.e. find both $\lambda$ and $v$ that work) is $$ Av = \lambda v $$ with $v \neq 0$. That's what "$v$ is an eigenvector of $A$" means. The restriction $v \neq 0$ is essential: the zero vector satisfies $Av = \lambda v$ for every $\lambda$, so allowing it would make every number an eigenvalue of every matrix, and the definition would say nothing. This is the actual equation you must understand in order to work with eigenvectors and eigenvalues. However, when it comes to actually finding them, it's not the best form, as $\lambda$ and $v$ "interfere" with one another; you can't really solve for just one of them. We manipulate it to $$ Av = \lambda v\\ Av - \lambda v = 0\\ (A - \lambda I)v = 0 $$ (where that $I$ pops out of "nowhere" because subtracting the scalar $\lambda$ from the matrix $A$ doesn't make sense, while subtracting $\lambda I$ does). After this manipulation, we see that the matrix $A-\lambda I$ has a non-trivial kernel: the non-zero vector $v$ is contained in it. And any square matrix with a non-trivial kernel must have zero determinant. This allows us to find $\lambda$ before we even start looking for $v$, which makes things simpler.
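To make this concrete, here is a small worked example of my own (the matrix is an illustration, not taken from the question): set the determinant of $A - \lambda I$ to zero to find the eigenvalues first, then recover an eigenvector from the kernel.

```latex
% Worked example with an assumed 2x2 matrix A.
\[
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix},
\qquad
\det(A - \lambda I)
= \det\begin{pmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{pmatrix}
= (2-\lambda)^2 - 1 = (\lambda - 1)(\lambda - 3).
\]
% The determinant vanishes at \lambda = 1 and \lambda = 3; these are
% the eigenvalues. Taking \lambda = 3, solve (A - 3I)v = 0:
\[
(A - 3I)v
= \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix} v = 0
\quad\Longrightarrow\quad
v = \begin{pmatrix} 1 \\ 1 \end{pmatrix},
\qquad
Av = \begin{pmatrix} 3 \\ 3 \end{pmatrix} = 3v.
\]
```

Note that any non-zero multiple of $(1, 1)$ works equally well, which is exactly the sense in which eigenvectors "stay on the same line".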