Are there singular matrices such that if we change any entry it will be non-singular?


Prove or disprove: for each natural $n$ there exists an $n \times n$ matrix with real entries such that its determinant is zero, but if one changes any single entry one gets a matrix with non-zero determinant.

I think we may be able to construct such matrices.

5 Answers

Answer (0 votes)

Let all the entries of the matrix be algebraically independent (over $\mathbb{Q}$) except for one entry, chosen so that the determinant is zero and the matrix is singular. The determinant is an affine function of each single entry: changing entry $(i,j)$ by $\delta$ changes the determinant by $\delta$ times the $(i,j)$ cofactor. The algebraic independence guarantees that every cofactor is nonzero, so changing any single entry by any nonzero amount yields a nonsingular matrix.
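As a numerical sanity check (an added sketch, not part of the answer): the entries of a random matrix are generically independent, and we can solve for one entry so that the determinant vanishes; every single-entry change should then give a nonzero determinant.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

# det(A) is affine in A[0,0]: det(A) = A[0,0]*C00 + r, where C00 is the
# (0,0) cofactor.  Solve for A[0,0] so that det(A) = 0.
# (Assumes C00 != 0, which holds almost surely for random entries.)
C00 = np.linalg.det(A[1:, 1:])
r = np.linalg.det(A) - A[0, 0] * C00
A[0, 0] = -r / C00
assert abs(np.linalg.det(A)) < 1e-9

# Changing any single entry now gives a nonsingular matrix.
for i in range(n):
    for j in range(n):
        B = A.copy()
        B[i, j] += 1.0
        assert abs(np.linalg.det(B)) > 1e-9
```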

Answer (2 votes)

Choose any matrix of rank $n-1$ that has none of the standard unit vectors in its column space or in its row space.

Added in response to the comment by alex.jordan.

Let $A$ be an $n \times n$ matrix with $\operatorname{rank}(A) = n-1$ such that there are vectors $a, \, \tilde a$ with $Aa = 0$, $\tilde a^T A = 0$ whose entries are all nonzero, $a_i, \tilde a_i \ne 0$ for all $i$. Then any matrix $B$ that differs from $A$ in exactly one entry has full rank, i.e. $\det B \ne 0$.

To prove this, consider such a $B$. After permuting rows and columns and rescaling, we may assume that $B = A + e_1e_1^T$, i.e. $B_{1,1} = A_{1,1} + 1$.

First note that the first column of $A$ is a linear combination of columns $2, \dots, n$, since $Aa = 0$ and $a_1 \ne 0$. Hence the column space of $A$ is spanned by columns $2, \dots, n$, which $A$ and $B$ share, so the column space of $B$ contains the column space of $A$ and $\operatorname{rank}(B) \ge n-1$.

If $\operatorname{rank}(B) < n$, then $A$ and $B$ must therefore have the same column space. Since the first columns of $B$ and $A$ differ by exactly $e_1$, the standard unit vector $e_1$ then lies in the column space of $A$, that is $Ac = e_1$ for some vector $c$. But then $0 = \tilde a^TAc = \tilde a^T e_1 = \tilde a_1$, contradicting the assumption that the left null vector of $A$ has no zero entries.

Therefore $\operatorname{rank}(B) = n$ and $\det B \ne 0$.
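A concrete instance of this criterion (my example, not from the answer) is the centering matrix $A = I - \frac{1}{n}J$: it is symmetric of rank $n-1$, and both its right and left null spaces are spanned by the all-ones vector, whose entries are all nonzero. A quick NumPy check:

```python
import numpy as np

n = 4
# A = I - J/n: rank n-1, and A @ ones = ones^T @ A = 0, so both null
# vectors have all entries nonzero -- exactly the hypothesis above.
A = np.eye(n) - np.ones((n, n)) / n
assert abs(np.linalg.det(A)) < 1e-12

# Any single-entry change makes the matrix nonsingular.
for i in range(n):
    for j in range(n):
        B = A.copy()
        B[i, j] += 0.5
        assert abs(np.linalg.det(B)) > 1e-12
```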

Answer (6 votes)

Yes (for $n>1$); in fact, almost every singular matrix (in a suitably generic sense) has this property.

To see this, observe that the property is implied by the following stronger one: the matrix has a one-dimensional null space, and the rows (and likewise the columns) are all involved in the linear dependence, that is, the dependent row can be written as a linear combination of the others with all coefficients nonzero.

To put it in layman's terms: it is very unlikely that the spanning vectors are linearly dependent at all; and given that they are, if we single out one that can be described as a linear combination of the others, it is very unlikely that the remaining ones are themselves linearly dependent. This means that a singular matrix almost surely has a null space of dimension one, and it is very unlikely that this null space aligns with one of the coordinate axes.

The assumption is a measure $\mu$ on the set of singular matrices that is invariant under rotations of the coordinate system, assigns finite measure to bounded hypersurfaces (and subsets thereof), assigns measure zero to surfaces of lower dimension, and assigns positive measure to the set of singular matrices of norm bounded by any positive radius.

So what we have to prove is that the set of matrices having one of the axes as null space has measure zero (the sets with null space of dimension larger than one form a surface of dimension less than $n^2-1$ and are therefore of measure zero).

To prove that the set of matrices having one of the axes as null space has measure zero, fix $r>0$ and consider the set $B_r$ of singular matrices with $||A||<r$, so that $0 < \mu(B_r) < \infty$. By rotating the coordinate system we can produce infinitely many pairwise disjoint sets $S_j \subseteq B_r$ of matrices with different (one-dimensional) null spaces, all congruent to the set of matrices in $B_r$ whose null space is a fixed axis, and hence all of the same measure. Since $\bigcup_j S_j\subseteq B_r$ we get $\sum_j\mu(S_j) = \mu\left(\bigcup_j S_j\right) \le \mu(B_r) < \infty$, and as the $\mu(S_j)$ are all equal we must conclude that $\mu(S_j) = 0$.

Finally we have to prove that the matrices that have a one-dimensional null space, but where the rows (or columns) are not all involved in the linear dependence, have measure zero. This is done by showing that they form a surface of dimension less than $n^2-1$, via explicit parameterizations. Such a matrix is determined by specifying all rows except one that is linearly dependent on the others, together with the coefficients of that linear combination. Normally this gives $n(n-1)$ coordinates for the rows, and the linear combination gives $n-1$ more coordinates. But under the restriction that one of the specified rows does not take part in the dependence, one of the coefficients in the linear combination is zero, which leaves fewer than $n-1$ additional coordinates. This approach yields one parameterization for each choice of dependent row and each choice of excluded row, but there are only finitely many such combinations, so the union still has measure zero.

To see why the condition is sufficient, consider rows $u_j$ and coefficients $c_j\ne 0$ such that $\sum_j c_ju_j = 0$, and assume that the null space of the matrix is spanned by a vector with no zero entries. The claim is that adding $\xi e_k$ (with $\xi \ne 0$) to any one row, say $u_1$, makes the rows linearly independent. Suppose not: then $b_1\xi e_k + \sum_j b_ju_j = 0$ for some coefficients $b_j$, not all zero. If $b_1 = 0$, this is a dependence among the original rows omitting $u_1$; but since the matrix has rank $n-1$, the dependence $\sum_j c_ju_j = 0$ is unique up to scaling, and it has all coefficients nonzero, a contradiction. If $b_1 \ne 0$, then $e_k = -(b_1\xi)^{-1}\sum_j b_ju_j$ lies in the row space of the matrix; but the row space is orthogonal to the null space, so the $k$-th entry of the null vector would be zero, contradicting the assumption.
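A numerical illustration of the genericity claim (an added sketch, not part of the answer): a product of random $n\times(n-1)$ and $(n-1)\times n$ Gaussian factors is almost surely a rank-$(n-1)$ matrix whose left and right null vectors have no zero entries, so every single-entry change should make it nonsingular.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4
# A "generic" singular matrix: a product of random n x (n-1) factors has
# rank n-1 almost surely.
A = rng.standard_normal((n, n - 1)) @ rng.standard_normal((n - 1, n))
assert abs(np.linalg.det(A)) < 1e-8

# Its right and left null vectors (last right singular vectors of A and
# A^T) almost surely have no zero entries -- the genericity condition.
right_null = np.linalg.svd(A)[2][-1]
left_null = np.linalg.svd(A.T)[2][-1]
assert np.all(np.abs(right_null) > 1e-6)
assert np.all(np.abs(left_null) > 1e-6)

# Hence changing any one entry yields a nonsingular matrix.
for i in range(n):
    for j in range(n):
        B = A.copy()
        B[i, j] += 1.0
        assert abs(np.linalg.det(B)) > 1e-8
```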

Answer (0 votes)

Take a look at $$M=\begin{bmatrix}1&0&0&\cdots&\cdots&0&(-1)^n\\1&1&0&\cdots&\cdots&0&0\\0&1&1&\cdots&\cdots&0&0\\0&0&1&\ddots&\cdots&0&0\\\vdots&\vdots&\vdots&\ddots&\ddots&\vdots&\vdots\\0&0&0&\cdots&\ddots&1&0\\0&0&0&\cdots&\cdots&1&1\end{bmatrix}$$

$M$ is singular because the alternating sum of the first $n-1$ columns gives you the last column: $$(-1)^nc_1+(-1)^{n+1}c_2+\cdots+(-1)^{n+(n-2)}c_{n-1}=c_n$$

What happens if a $0$ is changed to a $1$? Or a $\pm1$ is changed to a $0$?

[Hint: consider this matrix over the field $\mathbb{F}_2$. Over that field it is easier to show that the resulting columns would be linearly independent. This implies that no nontrivial integer-linear combination of $M$'s columns can be zero when $M$ is viewed as a matrix over $\mathbb{Z}$ (since a reduced form of that linear combination would involve at least one odd scalar). That in turn implies that no nontrivial rational-linear combination of $M$'s columns can be zero when $M$ is viewed as a matrix over $\mathbb{Q}$ (since otherwise you could multiply through by a least common denominator). And since all entries are rational, no nontrivial real-linear combination of $M$'s columns can be zero when $M$ is viewed as a matrix over $\mathbb{R}$.]
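One way to check the hint numerically (an added sketch): the determinant is affine in each entry, so changing entry $(i,j)$ to any value $t$ gives determinant $(t - M_{ij})\,C_{ij}$, where $C_{ij}$ is the $(i,j)$ cofactor. It therefore suffices that every cofactor of $M$ is nonzero; below we verify they are all odd, which is the $\mathbb{F}_2$ argument in disguise.

```python
import numpy as np

def build_M(n):
    """The matrix from the answer: ones on the main diagonal and the
    subdiagonal, and (-1)^n in the top-right corner."""
    M = np.eye(n, dtype=np.int64) + np.eye(n, k=-1, dtype=np.int64)
    M[0, -1] = (-1) ** n
    return M

for n in range(2, 8):
    M = build_M(n)
    assert round(np.linalg.det(M)) == 0  # M is singular

    # Changing entry (i,j) changes det by (new - old) * cofactor_ij,
    # so every cofactor must be nonzero.  Checking that each is odd
    # also verifies the columns stay independent over F_2.
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            cof = round(np.linalg.det(minor))
            assert cof % 2 == 1
```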

Answer (0 votes)

If you want a constructive proof, the following example works over any field. Consider the $n\times n$ matrix $$ A=\pmatrix{I_{n-1}&-\mathbf1_{n-1}\\ -\mathbf1_{n-1}^T&n-1}, $$ where $\mathbf1_{n-1}$ is the all-one vector. Since $A$ is a symmetric rank-$(n-1)$ matrix whose null space is spanned by $\mathbf 1_n$, its adjugate is a scalar multiple of the all-one matrix $\mathbf 1_n\mathbf 1_n^T$. As the $(n,n)$-th minor of $A$ is equal to $1$, we see that $\operatorname{adj}(A)$ is precisely $\mathbf 1_n\mathbf 1_n^T$. Therefore, by the rank-one update formula for the determinant (the matrix determinant lemma), we have \begin{aligned} \det(A+ae_ie_j^T) &=\det(A)+ae_j^T\operatorname{adj}(A)e_i\\ &=\det(A)+ae_j^T\mathbf1_n\mathbf1_n^Te_i\\ &=\det(A)+a\\ &=a.\\ \end{aligned}
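A quick numerical verification of this identity (added, not part of the answer), over $\mathbb{R}$ with NumPy:

```python
import numpy as np

n = 5
# The block matrix A = [[I, -1], [-1^T, n-1]] from the answer.
A = np.block([[np.eye(n - 1), -np.ones((n - 1, 1))],
              [-np.ones((1, n - 1)), np.array([[n - 1.0]])]])
assert abs(np.linalg.det(A)) < 1e-9  # A is singular: A @ ones = 0

# The identity det(A + a*e_i e_j^T) = a, for every position (i, j).
a = 2.5
for i in range(n):
    for j in range(n):
        B = A.copy()
        B[i, j] += a
        assert abs(np.linalg.det(B) - a) < 1e-6
```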