In question 6, why does the determinant of $(A - \lambda I)$ give its eigenvector?
1.8k Views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 3 best solutions below.
Let's take $A - 1I = A - I$ as an example.
We know, from the fact that $\lambda = 1$ is an eigenvalue of $A$, that $A - I$ is singular, i.e. it has a non-trivial nullspace. Here is a useful general fact:
The nullspace of a matrix $M$ is orthogonal to the column space of $M^T$.
Once we have this fact, it is easy to see what they are doing: they have taken the second and third columns of $(A - I)^T$ (it's not really important which columns we pick, as long as they're linearly independent; the $\lambda = 3$ example picks columns 1 and 2) and calculated the cross product of those two columns using the standard "determinant" mnemonic. This gives a vector which is orthogonal to both of those columns. Since we know the column space of $(A-I)^T$ has dimension $2$ (it is a singular $3\times 3$ matrix, but the columns aren't multiples of one another, so the rank isn't $1$ or $0$), a vector which is orthogonal to those two columns must be orthogonal to all of the column space of $(A-I)^T$, and it must therefore be contained in the 1-dimensional nullspace of $A-I$.
So, why is this fact true? Well, let's take an arbitrary vector $v$ in the nullspace of $M$, and an arbitrary vector $M^Tw$ in the column space of $M^T$. Their dot product, $v^TM^Tw$, is a $1\times 1$ matrix, which is therefore symmetric. Which is to say, we can transpose it without changing the value: $$ v^TM^Tw = (v^TM^Tw)^T = w^TMv = w^T(Mv) = w^T0 = 0 $$ so $v$ and $M^Tw$ are orthogonal, which is what we wanted to prove.
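The cross-product trick above can be checked numerically. Here is a minimal sketch with a made-up $3\times 3$ matrix (not the matrix from "question 6") that has $\lambda = 1$ as an eigenvalue, so that $M = A - I$ is singular of rank $2$:

```python
import numpy as np

# Hypothetical example matrix with eigenvalue 1 (assumption, not from the question).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
M = A - np.eye(3)  # singular, rank 2

# Cross two linearly independent rows of M (i.e. columns of M^T).
# The result is orthogonal to both, hence to the whole row space of M,
# so it lies in M's 1-dimensional nullspace.
v = np.cross(M[0], M[2])

print(M @ v)   # ~ [0, 0, 0]: v is in the nullspace of A - I
print(A @ v)   # equals 1 * v, so v is an eigenvector for lambda = 1
```

Note that rows 0 and 2 are chosen here because row 1 of $M$ happens to be zero; any two linearly independent rows would do.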
Start from the eigenvalue problem: we look for $\vec{v}, \lambda$ such that
$$A \vec{v} = \lambda \vec{v} = \lambda I \vec{v}$$
where $I$ is the identity matrix. Gathering things on the left hand side, we are trying to solve
$$(A - \lambda I) \vec{v} = 0$$
but we aren't interested in the "trivial" solution $\vec{v}=0$. If the matrix $(A - \lambda I)$ is invertible, then we can multiply by its inverse on both sides and we just get $\vec{v}=0$. A matrix is invertible if and only if its determinant is not equal to $0$. So we are looking specifically for the case that
$$\det(A - \lambda I)=0$$
to find the eigenvalues of $A$.
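As a quick sanity check, the roots of the characteristic polynomial $\det(A - xI)$ do coincide with the eigenvalues. A small sketch, using an arbitrary symmetric example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # example matrix (an assumption, chosen for illustration)

# Coefficients of det(A - x I) as a polynomial in x.
coeffs = np.poly(A)
roots = np.roots(coeffs)    # the eigenvalues of A: here 3 and 1

print(np.sort(roots))                 # ~ [1, 3]
print(np.sort(np.linalg.eigvals(A)))  # same values, computed directly

# At each root, det(A - lambda I) is (numerically) zero:
for lam in roots:
    print(np.linalg.det(A - lam * np.eye(2)))  # ~ 0
```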


The roots of the equation $\det(A-xI)=0$ are the eigenvalues, not the eigenvectors. This is because, for a root $\lambda$ of this equation, the linear map $v \mapsto (A-\lambda I)v$ is not injective, i.e. its kernel is not reduced to the zero vector.
Then for a given eigenvalue $\alpha$, the non-zero vectors $v$ for which $(A-\alpha I)v=0$ are the eigenvectors associated with the eigenvalue $\alpha$.
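The last step, finding the non-zero solutions of $(A-\alpha I)v=0$, amounts to computing the nullspace of $A-\alpha I$. One common way is via the SVD: a right-singular vector with singular value (numerically) zero spans the nullspace. A sketch, using a hypothetical example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # example matrix (assumption); alpha = 3 is an eigenvalue
alpha = 3.0

# Eigenvectors for alpha are the nonzero solutions of (A - alpha*I) v = 0.
# The last row of Vt (smallest singular value, here ~0) spans that nullspace.
_, s, Vt = np.linalg.svd(A - alpha * np.eye(2))
v = Vt[-1]

print(s)                              # last singular value ~ 0
print((A - alpha * np.eye(2)) @ v)    # ~ [0, 0]
print(A @ v - alpha * v)              # ~ [0, 0]: v is an eigenvector for alpha
```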