Why can the determinant be assumed to be 0?


I'm trying to work through how to calculate eigenvalues and eigenvectors.

I start with

$$Ax=\lambda x$$

Where $A$ is a $p \times p$ matrix, $\lambda$ is the eigenvalue and $x$ is the eigenvector.

This is the same as:

$$Ax=I\lambda x$$

$$Ax-I\lambda x=0$$

$$(A-I\lambda) x=0$$

We define the matrix $A$ as a $2 \times 2$ matrix:

$\begin{bmatrix}4 & -2\\-3 & 6\end{bmatrix}$

Thus $A-I\lambda$ equals

$\begin{bmatrix}4-\lambda & -2\\-3 & 6-\lambda\end{bmatrix}$

$$\det(A-I\lambda)=(4-\lambda)(6-\lambda)-(-3)\cdot(-2)$$

$$\det(A-I\lambda)=24-10\lambda+\lambda^2-6$$

$$\det(A-I\lambda)=18 - 10\lambda + \lambda^2 $$

Then, out of the blue my textbook claims that

$$0=30 - 10\lambda + \lambda^2 $$

How do I justify setting the determinant to $0$?

(I do not have advanced knowledge of linear algebra; I only know how the determinant is used to calculate the inverse matrix.)

There are 9 answers below.

Accepted answer:

The text is not claiming that the determinant is $0$. The text is saying "Let's find out for which values of lambda the determinant is $0$!"

So the determinant is $\lambda^2 - 10\lambda + 30$, and you want to find the $\lambda$ such that it is equal to zero. What do you do? You set it equal to zero and solve for $\lambda$. That is, you solve the equation

$$\lambda^2 - 10\lambda + 30 = 0$$


As for why you are interested in the values of $\lambda$ that make the determinant equal to $0$, remember that

$$\operatorname{rank}(A-\lambda I) = n \iff \det(A - \lambda I) \neq 0$$

So, if $\det(A-\lambda I) \neq 0$, you will find that the only solution to $(A - \lambda I)x = 0$ is $x = 0$ (due to the fact that the rank of the matrix is full, hence the kernel only contains the $0$ vector). This means that the only $x$ such that $Ax = \lambda x$ is $x=0$, which means that $x$ is not an eigenvector.

So the only way to have eigenvectors is to have the determinant of $A - \lambda I$ be equal to zero; that's why, to find eigenvalues, you look for the values of $\lambda$ that make $\det(A - \lambda I) = 0$.
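This can be checked numerically. The following NumPy sketch (my addition, not part of the original answer) takes the matrix from the question, asks for its eigenvalues, and confirms that $\det(A - \lambda I)$ vanishes at each of them:

```python
import numpy as np

# The matrix from the question.
A = np.array([[4.0, -2.0],
              [-3.0, 6.0]])

# NumPy's eigenvalue routine returns exactly the lambdas
# for which det(A - lambda*I) = 0.
eigenvalues = np.linalg.eigvals(A)

dets = [np.linalg.det(A - lam * np.eye(2)) for lam in eigenvalues]
for lam, d in zip(eigenvalues, dets):
    print(f"lambda = {lam:.4f}  ->  det(A - lambda*I) = {d:.2e}")
```

Each printed determinant is zero up to floating-point round-off, whereas any $\lambda$ that is not an eigenvalue gives a nonzero determinant.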

Answer:

The determinant of an $n\times n$ matrix $M$ is equal to $0$ if and only if the rank of the matrix is smaller than $n$, which happens if and only if the kernel of the matrix is nontrivial (contains a nonzero vector), which happens if and only if there exists some vector $x\ne0$ such that $Mx=0$.


Therefore, $\lambda$ is an eigenvalue of $A$ $\iff$ the determinant of $A-\lambda I$ is equal to $0$.

Answer:

For a square matrix like $M = (A - \lambda I)$, the equation $Mx = 0$ will have a non-zero solution $x$ if and only if $M$ doesn't have an inverse, which is true if and only if the determinant of $M$ is $0$.
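As a concrete check (a NumPy sketch of my own, assuming the $2 \times 2$ matrix from the question), one can build $M = A - \lambda I$ at an eigenvalue and exhibit a nonzero $x$ with $Mx = 0$:

```python
import numpy as np

A = np.array([[4.0, -2.0],
              [-3.0, 6.0]])

lam = np.linalg.eigvals(A)[0]     # one eigenvalue of A
M = A - lam * np.eye(2)

# det(M) vanishes, so M has no inverse ...
det_M = np.linalg.det(M)

# ... and M x = 0 has a nonzero solution.  For a singular 2x2 matrix
# [[a, b], [c, d]] (i.e. ad = bc), the vector (-b, a) lies in the null space.
x = np.array([-M[0, 1], M[0, 0]])
print("det(M) =", det_M)          # numerically zero
print("M @ x  =", M @ x)          # numerically the zero vector
print("x      =", x)              # but x itself is nonzero
```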

Answer:

Note that if you try to find an eigenvector directly, taking the coordinates of the eigenvector to be $a$ and $b$, then you have

$$Ax=\lambda x$$

$$A\begin{bmatrix}a \\b \end{bmatrix}=\lambda \begin{bmatrix}a \\b \end{bmatrix}$$

Expanding out that equation, you get

$$4a-2b = \lambda a$$

$$-3a+6b = \lambda b$$

which is equivalent to

$$(4-\lambda)a-2b = 0$$

$$-3a+(6-\lambda)b = 0$$

So $b = (4-\lambda)a/2$.

Substituting that into the bottom equation,

$$-3a+(6-\lambda)(4-\lambda)a/2 = 0$$

Factoring out $a$,

$$-3+(6-\lambda)(4-\lambda)/2 = 0$$

$$-6+(6-\lambda)(4-\lambda) = 0$$

And now we're back to the equation that was derived from setting the determinant to zero. Setting the determinant of a matrix to zero simply uses the properties of matrices to reach that equation more quickly.
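As a quick numerical confirmation (my own NumPy sketch, not part of the original answer): expanding $-6+(6-\lambda)(4-\lambda)=0$ gives $\lambda^2-10\lambda+18=0$, and its roots coincide with the eigenvalues NumPy computes for $A$ directly:

```python
import numpy as np

A = np.array([[4.0, -2.0],
              [-3.0, 6.0]])

# Direct route: -6 + (6 - lam)(4 - lam) = 0, i.e. lam^2 - 10*lam + 18 = 0.
direct_roots = np.sort(np.roots([1.0, -10.0, 18.0]))

# Determinant route: eigenvalues of A, i.e. the roots of det(A - lam*I).
eig_roots = np.sort(np.linalg.eigvals(A))

print(direct_roots)
print(eig_roots)     # the same two values, 5 - sqrt(7) and 5 + sqrt(7)
```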

Answer:

Here is another way to look at your problem. You started with

$$Ax=I\lambda x$$

and you reasoned

$$Ax-I\lambda x=0$$

$$(A-I\lambda) x=0 \tag{1.}$$

Let the columns of $A-I\lambda$ be $v_1, v_2, \dots, v_n$. Then equation $(1.)$ can be written as

$$v_1x_1 + v_2x_2 + \cdots +v_nx_n = 0$$

In other words, since we are looking for a nonzero solution $x$, the columns of $A-I\lambda$ are linearly dependent. That implies that the determinant of $A-I\lambda$ is zero.
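The linear-dependence claim can be checked numerically; the NumPy sketch below (my addition, using the matrix from the question and an explicit tolerance for the rank computation) shows the rank of $A - \lambda I$ dropping at an eigenvalue:

```python
import numpy as np

A = np.array([[4.0, -2.0],
              [-3.0, 6.0]])
I = np.eye(2)

lam = np.linalg.eigvals(A)[0]

# At an eigenvalue the columns of A - lam*I are linearly dependent,
# so the rank drops below 2 (tol guards against round-off).
rank_at_eig = np.linalg.matrix_rank(A - lam * I, tol=1e-8)

# At a non-eigenvalue (100 is not an eigenvalue of A) the columns
# remain independent and the rank is full.
rank_away = np.linalg.matrix_rank(A - 100.0 * I, tol=1e-8)

print(rank_at_eig, rank_away)   # 1 2
```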

Answer:

Here's a geometric interpretation of Ant's answer: the determinant tells you what happens to a unit volume of space after applying your transformation.

For example, the identity map $I$ leaves everything alone, so volume stays the same, so the determinant of $I$ is $1$. A multiple of the identity $rI$ stretches everything by a factor of $r$ in all $p$ directions, so the determinant of $rI$ is $r^p$.

In general, if your transformation $A$ has a set of eigenvectors $v_i$ which span your space then, in the direction of $v_i$, $A$ stretches things by a factor of the corresponding eigenvalue $λ_i$, and so overall it multiplies volume by the product of all the $λ_i$. So the determinant of $A$ is just the product of its eigenvalues - counted by multiplicity, i.e. according to how many independent eigenvectors each one has.

As for the property I stated, that the determinant equals the factor by which a volume of space increases in size: well, you can take that as a definition, and then check that it corresponds to the formula you're familiar with, e.g. by looking at what happens to the unit $p$-dimensional cube spanned by your basis vectors. (This definition also explains why it's involved in the calculation of inverses!)

To make the connection with your question explicit: if the determinant equals the product of the eigenvalues, then it will be zero exactly when one of them is zero. $A v = λ v$ is equivalent to $(A − λI) v = 0$, which says that $v$ is an eigenvector of $A − λ I$ with eigenvalue $0$, so the determinant of $A − λ I$ must be $0$ since it is the product of the eigenvalues.
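A small NumPy check of the product-of-eigenvalues fact (my sketch, reusing the matrix from the question):

```python
import numpy as np

A = np.array([[4.0, -2.0],
              [-3.0, 6.0]])

eigs = np.linalg.eigvals(A)

# det(A) equals the product of the eigenvalues (both come out to 18 here,
# up to round-off) ...
prod_eigs = np.prod(eigs)
det_A = np.linalg.det(A)
print(prod_eigs, det_A)

# ... so det(A - lam*I) vanishes exactly when lam is an eigenvalue:
# subtracting eigs[0]*I shifts one eigenvalue to 0, and the product
# of the shifted eigenvalues is then 0.
print(np.linalg.det(A - eigs[0] * np.eye(2)))
```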

Answer:

Let me give you some intuition that's not so algebraic but more geometric. Before I start, I'd like you to remember these:

  • A matrix represents some sort of linear transform from one space to another.
  • The determinant of a matrix represents the factor by which the volume of any object changes under the transform, compared to its original volume.

You can think of the equation

$(A - \lambda I)x = 0$

as applying the transform $(A - \lambda I)$ to a vector $x$, so that $x$ becomes a zero vector.

No matter what that transform looks like, it must involve squashing the original space into a flatter space along the vector $x$, for example, squashing a 3D space into a 2D plane, or even a 1D line or a 0D point. The space after the transform is at least one dimension less than the original space, so the volume of any object in the original space becomes zero, so the determinant of $(A - \lambda I)$ has to be zero.

P.S. My intuition comes from this series of videos by 3Blue1Brown: Essence of Linear Algebra.

Answer:

I had the same question, and the same initial reaction to the explanation by @Ant, so maybe this might help.

I looked up the properties of invertible matrices https://en.wikipedia.org/wiki/Invertible_matrix#Properties and reasoned thus (changing the notation for the matrix from $A$ to $P$):

Recast $(P-I\lambda)x = 0$ as the familiar system of linear equations $Ax=b$.

If the matrix $A$ is invertible, then there exists exactly one solution to the equation $Ax=b$.

Since $b=0$, if $A$ is invertible then that one solution to the equation is $x=0$. But we don't want $x=0$ as that is not a useful result.

So let's make $A$ non-invertible. We can do that by setting $\det(A) = \det(P-I \lambda) = 0$.
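To illustrate the invertible case numerically (my own NumPy sketch; the value $\lambda = 100$ is an arbitrary non-eigenvalue, and I keep the question's name $A$ for the matrix rather than $P$):

```python
import numpy as np

A = np.array([[4.0, -2.0],
              [-3.0, 6.0]])

# 100 is not an eigenvalue, so M = A - 100*I is invertible ...
M = A - 100.0 * np.eye(2)
det_M = np.linalg.det(M)
print("det(M) =", det_M)            # far from zero

# ... and M x = 0 then has exactly one solution: the useless x = 0.
x = np.linalg.solve(M, np.zeros(2))
print("x =", x)                     # the zero vector
```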

Answer:

An eigenvector of a transformation matrix is a vector that the transformation merely scales by some factor $\lambda$. Let $X$ be the eigenvector and $A$ the transformation matrix acting on $X$. Thus

$$AX = \lambda X$$

($\lambda$ being the scaling factor), so

$$AX-\lambda X=0$$

$$(A-\lambda I)X = 0$$

Now $(A-\lambda I)$ can be considered as another transformation matrix that sends $X$ to zero. By an algebraic rule, if a transformation matrix transforms a nonzero vector $X$ to zero, then its determinant must be zero. That is exactly the condition we satisfy by finding $\lambda$ (which we call the eigenvalues from here on) using $\det(A-\lambda I)=0$. Thus we impose the condition $\det(A-\lambda I) = 0$, find the eigenvalues $\lambda$ from it, and then, supplying the eigenvalues back, obtain the corresponding eigenvectors.

This is how I understood why we need to use the determinant.