Why does a determinant of $0$ mean the matrix isn't invertible?


I was always taught that if the determinant of a matrix is $0$ then the matrix isn't invertible, but why is that?

My flawed attempt at understanding things:

This approaches the subject from a geometric point of view. Take two $2\times2$ matrices: by definition, $A$ has an inverse if there exists a matrix $B$ such that $AB=I$; here $B$ will be denoted $A^{-1}$.

From my understanding, a determinant of $0$ means that the transformation "compresses" the space onto a one-dimensional line or a point. If applying $A$ collapses everything to a point, then no linear transformation we apply afterwards can get us back to $I$ in $2$ dimensions, since we're left with a point and we can't really stretch it and play around with it like a vector.

Why I realized my attempt is flawed:

While writing this I remembered that linear transformations between spaces of different dimensions exist, so it wouldn't make much sense to say we can't get back to $I$ in two dimensions once we have a vector on a one-dimensional line (though I still can't really see the flaw in the case where we get a point instead of a vector).

Can anyone correct my approach and/or provide an algebraic one as well?

There are 4 answers below.

BEST ANSWER

You're almost there: the forward transformation is many-to-one, since infinitely many points are sent to the same image point, so the "inverse" would have to be one-to-many, which no function can be. Equivalently, the columns of the matrix are not linearly independent (the images of $\hat{i}$ and $\hat{j}$ both lie on the same 1D line), so the matrix is not invertible.
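As a minimal sketch of this answer in plain Python (the matrix $A$ below is my own illustrative choice, not from the original post): a $2\times2$ matrix with linearly dependent columns has determinant $0$ and sends distinct inputs to the same point, so no inverse can exist.

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    return a * d - b * c

def apply(m, v):
    """Apply the matrix m to the column vector v = (x, y)."""
    (a, b), (c, d) = m
    x, y = v
    return (a * x + b * y, c * x + d * y)

# Second column is twice the first, so the columns are dependent
# and the whole plane is collapsed onto one line.
A = [[1, 2],
     [2, 4]]

print(det2(A))            # 0: the transformation is not invertible

# Distinct inputs land on the same point, so no map can undo A.
print(apply(A, (2, 0)))   # (2, 4)
print(apply(A, (0, 1)))   # (2, 4)
```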

ANSWER

You can show the formula \begin{align} M \times \left(\mathrm{com}M\right)^T = \det M \cdot I_n \end{align}

where $\mathrm{com}\,M$ is the comatrix of $M$, a matrix constructed from the cofactors of $M$. Thus, if $\det M \neq 0$, you can write $M \times \dfrac{(\mathrm{com}\,M)^T}{\det M} = I_n$, so $M$ is invertible.

If $\det M = 0$, on the contrary, there are two cases: if $M$ has rank $\leqslant n-2$, then it is clearly not invertible. If it has rank $n-1$, then $\mathrm{com}\,M$ is a non-zero matrix, so $B = (\mathrm{com}\,M)^T$ is a non-zero matrix with $M\times B = 0$, and $M$ cannot be invertible.
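A small numerical sketch of this identity for the $2\times2$ case (the matrix $M$ is an assumed example): the transposed comatrix is easy to write down explicitly, and when $\det M = 0$ it is a non-zero matrix $B$ with $M\times B = 0$.

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def adj2(m):
    """Transposed comatrix (adjugate) of a 2x2 matrix."""
    (a, b), (c, d) = m
    return [[d, -b], [-c, a]]

def matmul2(m, n):
    """Product of two 2x2 matrices."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[1, 2],
     [2, 4]]          # det M = 0, rank 1

B = adj2(M)           # [[4, -2], [-2, 1]], a non-zero matrix
print(matmul2(M, B))  # [[0, 0], [0, 0]], i.e. det(M) * I
```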

ANSWER

All the matrices will be $n \times n.$ Suppose $M$ is invertible and $\det M=0.$ By the definition of invertibility, there exists a matrix $B$ such that $$BM=I.$$ Then $$\det (BM)=\det(I)$$ $$\det(B)\det(M)=1$$ $$\det(B) \cdot 0=1 $$ $$0=1,$$ a contradiction.
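The key step above, $\det(BM)=\det(B)\det(M)$, is the multiplicativity of the determinant. A hedged numerical check for the $2\times2$ case (the matrices $B$ and $M$ here are arbitrary illustrative values):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(m, n):
    """Product of two 2x2 matrices."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[1, 3], [0, 2]]
M = [[5, 1], [4, 2]]

print(det2(matmul2(B, M)))   # 12
print(det2(B) * det2(M))     # 12: det(BM) = det(B) * det(M)
```

Since the determinant of any product with a $\det$-zero factor is $0$, no $B$ can make $\det(BM)=\det(I)=1$.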

ANSWER

This is really a supplement to the other good answers that you’ve already gotten. There are a couple of details in your question that I think are worth drilling down into. I’ll assume that we’re working in a Euclidean space and use “point” and “vector” interchangeably, as is commonly done.

First, since we’re looking at $2\times2$ (presumably real) matrices, we’re talking about linear transformations of the plane $\mathbb R^2$: linear transformations that map from a vector space to one with a different dimension aren’t directly relevant. The image of that transformation might well be a $1$- or even $0$-dimensional subspace of $\mathbb R^2$, but it still lives within that larger-dimensional space.

Since you’re looking at things geometrically, the order in which you apply the original transformation and its inverse matters because with different orders the operations have somewhat different geometrical interpretations.† Using the usual convention of left-multiplying column vectors by the transformation matrix, the expression $AB$ means that you first send a point somewhere with $B$, then apply the transformation represented by $A$ to that (and hope you got back to where you started). That’s a little bit different from $BA$: in that case we first apply the $A$-transformation to a vector, then send it “back” with $B$.

In the first situation, $AB$, it’s quite clear that there’s no possible $B$ when the determinant of $A$ is zero. $AB$ has to send every point to itself, but if the image of $A$ is either a line or point, there are a lot of points that are never reached by $A$—there’s no place to send them so that $A$ will send them back.

With $BA$, on the other hand, we can see that there's no way to send a vector back by using linearity of the transformations. Since a linear transformation always maps the zero vector to itself, if $A$ collapses the entire space to a point, then that point must be the zero vector. Any possible $B$ can only map that to itself, so you're stuck there: we can't get back to the original vector. If $A$ collapses the space to a line, we have a similar problem: since $0$ goes to $0$, that line passes through the origin, so there is a fixed vector $\mathbf w$ such that for every $\mathbf v$, $A\mathbf v=\lambda\mathbf w$ for some scalar $\lambda$ depending on $\mathbf v$. Unfortunately, $B(\lambda\mathbf w)=\lambda(B\mathbf w)$, so the best we can do when trying to send a point back is to map it to a point on some other fixed line. If the original point was on this line, great, but otherwise we're stuck again—we can't get off that line. Thus, there's no matrix $B$ for which $BA\mathbf v=\mathbf v$ for all vectors $\mathbf v\in\mathbb R^2$.
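The $BA$ argument above can be sketched numerically in plain Python (the matrices $A$ and $B$ are assumed examples, not from the original answer): $A$ collapses the plane onto a single line, and any candidate "inverse" $B$ can only move that line to another fixed line, so $BA\mathbf v=\mathbf v$ fails for some $\mathbf v$.

```python
def apply(m, v):
    """Apply the 2x2 matrix m = [[a, b], [c, d]] to v = (x, y)."""
    (a, b), (c, d) = m
    x, y = v
    return (a * x + b * y, c * x + d * y)

A = [[1, 0],
     [1, 0]]             # collapses everything onto the line y = x

# Every image A*v is a multiple of w = (1, 1) ...
print(apply(A, (3, 7)))  # (3, 3)
print(apply(A, (5, 2)))  # (5, 5)

# ... so for any B, B(A*v) stays on the single line spanned by B*w:
B = [[2, 0],
     [0, 3]]             # an arbitrary candidate "inverse"
print(apply(B, apply(A, (3, 7))))  # (6, 9), not (3, 7): stuck on span{(2, 3)}
```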


† In general, matrices can have left or right inverses. When the matrix is square, its left and right inverses, if they exist, are identical.