Nearly linearly dependent


I'm considering a situation where vectors are nearly linearly dependent.

Suppose that $A$ is $2 \times 2$ matrix

$A= \left[ \begin{array}{cc} 1 & a \\ a & 1 \end{array} \right] $

If $a=1$, $A$ is of course singular and its columns are linearly dependent. If $a=0$ or $a=0.999$, $A$ is non-singular in both cases. But clearly $a=0.999$ is closer to being singular, i.e. its columns are closer to being linearly dependent, than $a=0$. In this $2 \times 2$ case, the determinant can be used to measure the closeness to singularity or linear dependence.
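A quick numerical check of this using NumPy (the helper name `closeness_to_singular` is my own, not standard terminology):

```python
import numpy as np

def closeness_to_singular(a):
    """Determinant of [[1, a], [a, 1]]; a value near zero means near-singular."""
    A = np.array([[1.0, a], [a, 1.0]])
    return np.linalg.det(A)

# det = 1 - a^2, so a = 0.999 gives 0.001999, much closer to zero than a = 0
print(closeness_to_singular(0.0))
print(closeness_to_singular(0.999))
```

For this matrix the determinant is simply $1 - a^2$, which shrinks toward zero as $a \to 1$.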

Now consider the $3 \times 2$ matrices $B_1$ and $B_2$:

$B_1 = \left[ \begin{array}{cc} 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{array} \right] $

$B_2 = \left[ \begin{array}{cc} 1 & 0.999 \\ 1 & 1 \\ 1 & 1.001 \end{array} \right] $

Mathematically, the columns of both $B_1$ and $B_2$ are linearly independent. But the columns of $B_2$ are closer to being linearly dependent than those of $B_1$.

My question is: are there any ways to measure the closeness to linear dependence for such non-square matrices? In other words, how can I demonstrate that $B_2$ is closer to being linearly dependent than $B_1$?

2 Answers

Answer 1

Definitely. For square matrices, just calculate the determinant: if the result is close to zero, then the columns of your matrix are close to being linearly dependent (though, strictly speaking, they are not exactly linearly dependent). For a non-square matrix $B$, the columns are nearly dependent if the determinant of $B^TB$ is nearly zero.
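This criterion can be checked directly for the two example matrices from the question (the helper name `gram_det` is my own):

```python
import numpy as np

def gram_det(B):
    """det(B^T B): a value near zero means the columns of B are nearly dependent."""
    return np.linalg.det(B.T @ B)

B1 = np.array([[1.0, 0.0],   [1.0, 1.0], [1.0, 2.0]])
B2 = np.array([[1.0, 0.999], [1.0, 1.0], [1.0, 1.001]])

print(gram_det(B1))  # 6.0
print(gram_det(B2))  # ~6e-06, six orders of magnitude smaller
</imports>```

The Gram determinant for $B_2$ is about $6 \times 10^{-6}$, versus $6$ for $B_1$, which quantifies how much closer $B_2$ is to having dependent columns.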

Answer 2

A diagonalizable square matrix of rank $r$ has $r$ non-zero eigenvalues (counted with multiplicity); in general, a square matrix is singular iff one of its eigenvalues is zero. The determinant (i.e. the product of all eigenvalues) is not a good quantitative measure of singularity, however. For example, the matrix $\left[ \begin{array}{cc} 10^{10} & 0 \\ 0 & 10^{-10} \end{array} \right] $ has determinant $1$ but is clearly (to me) more nearly singular than the identity matrix. The smallest eigenvalue by magnitude is a better measure of that (possibly normalized by the largest eigenvalue).
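A small numerical illustration of why the determinant misleads here, comparing the product of eigenvalues with the ratio of smallest to largest magnitude:

```python
import numpy as np

D = np.diag([1e10, 1e-10])  # determinant 1, yet nearly singular
I = np.eye(2)               # determinant 1, perfectly conditioned

for M in (D, I):
    ev = np.abs(np.linalg.eigvals(M))
    # product of eigenvalues (= determinant) vs. normalized smallest eigenvalue
    print(np.prod(ev), ev.min() / ev.max())
```

Both matrices have determinant $1$, but the normalized smallest eigenvalue is $10^{-20}$ for the first and $1$ for the identity, correctly separating the two cases.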

The generalization to non-square matrices that you are looking for is the singular values. The smallest singular value is zero for singular matrices and close to zero for nearly singular ones. For your example matrices $B_1$ and $B_2$, the smallest singular values are $0.915272$ and $0.001$, respectively.
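These two values can be reproduced with NumPy's SVD routine:

```python
import numpy as np

B1 = np.array([[1.0, 0.0],   [1.0, 1.0], [1.0, 2.0]])
B2 = np.array([[1.0, 0.999], [1.0, 1.0], [1.0, 1.001]])

for B in (B1, B2):
    s = np.linalg.svd(B, compute_uv=False)  # singular values, descending order
    print(s.min())  # ~0.915272 for B1, ~0.001 for B2
```

Note that the smallest singular value of $B$ is the square root of the smallest eigenvalue of $B^TB$, so this measure is consistent with the Gram-determinant criterion in the other answer, but better scaled.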