I read an explanation saying that if I take a set of vectors and write them as the rows of a matrix, I can use Gaussian elimination to check whether I can reach a row of zeroes. If a row of zeroes is unreachable, then the vectors are linearly independent.
I guess I have a gap in my knowledge of the Gaussian elimination method.
Suppose I have the following set of vectors: $(1,6,2),(2,3,-3),(1,5,4)$.
writing in matrix form: $\begin{bmatrix} 1& 6 &2 \\ 2& 3 &-3 \\ 1& 5 & 4 \end{bmatrix}\rightarrow \begin{bmatrix} 1& 6 &2 \\ 0& -9 &-7 \\ 0& -1 & 2 \end{bmatrix}\rightarrow\begin{bmatrix} 1& 6 &2 \\ 0& -9 &-7 \\ 0& 0 & -25 \end{bmatrix}$
This example is given, and it is claimed that there is no way to reach a row of zeroes.
My question is: what is the sign that no further operations will get me to the desired outcome of a row of zeroes? I know that in Gaussian elimination I can add a multiple of one row to another and multiply a row by a scalar. But when do I stop doing further manipulations on the matrix?
The determinant of a matrix is invariant under any elementary row or column operations$^{(*)}$.
Once you have reached a triangular matrix, you can stop: as long as you use only elementary operations$^{(*)}$, every matrix in the chain has the same determinant, and for a triangular matrix the determinant is simply the product of the diagonal entries. Here all the diagonal entries are non-zero, so that determinant is non-zero. If you could reach a matrix with a row of zeroes, its determinant would be zero, which is impossible.
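To make this concrete, here is a small NumPy sketch (the names `A` and `U` are my own). It runs the elimination using only row additions, so the last row comes out as a scalar multiple of the one shown in the question, but the conclusion is the same: the product of the diagonal entries of the triangular matrix equals the determinant of the original matrix, and it is non-zero.

```python
import numpy as np

# The three vectors as the rows of a matrix.
A = np.array([[1.0, 6.0, 2.0],
              [2.0, 3.0, -3.0],
              [1.0, 5.0, 4.0]])

# Eliminate using only row additions, which leave the determinant unchanged.
U = A.copy()
U[1] -= 2 * U[0]        # R2 -> R2 - 2*R1      gives (0, -9, -7)
U[2] -= U[0]            # R3 -> R3 - R1        gives (0, -1, 2)
U[2] -= (1 / 9) * U[1]  # R3 -> R3 - (1/9)*R2  gives (0, 0, 25/9)

print(np.linalg.det(A))     # -25.0 (up to rounding)
print(np.prod(np.diag(U)))  # 1 * (-9) * (25/9) = -25.0
```

Both prints agree, and neither is zero, so no sequence of row additions can ever produce a row of zeroes here.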
$^{(*)}$[By elementary row/column operations I mean adding a multiple of one row/column to another row/column. Advanced row/column operations consist of multiplying a row/column by a non-zero constant, or switching two rows/columns; advanced operations do change the determinant, but only by a non-zero factor or a sign, so they can never turn a non-zero determinant into zero, and in any case you don't need them to reach row echelon form.]