For a square matrix, I usually tell whether its columns are linearly independent by reducing it to row-echelon form and checking whether there exist scalars c1, ..., cn, not all zero, with c1*x1 + ... + cn*xn = 0 (where x1 through xn are the column vectors and 0 is the zero vector). However, this approach is less clear to me for non-square matrices. For example, say I have the matrices:
a = [ 3  2  2
      2  3 -2 ]
b = [ 1  2  3
      1  3  4 ]
By intuition, I would have guessed that matrix a's columns are linearly independent. However, after playing around, I saw that c1=-4, c2=4, c3=2 produces the zero vector, which indicates linear dependence. Matrix b is perhaps more obvious by construction, but it is also linearly dependent, because c1=1, c2=1, c3=-1 produces the zero vector. However, I am not sure how to reach this conclusion in a foolproof, rigorous, structured way (either by reducing the matrices somehow, or by recalling matrix properties that indicate whether the columns are linearly independent or dependent). This was more of a guess-and-check approach that I want to formalize.
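To make the guess-and-check concrete, here is a small sketch (my own, not part of the original question) that verifies the coefficients above really produce the zero vector. The helper `combine` is a hypothetical name for "form the linear combination c1*x1 + ... + cn*xn":

```python
def combine(cols, coeffs):
    """Return the linear combination c1*x1 + ... + cn*xn of the column vectors."""
    rows = len(cols[0])
    return [sum(c * col[i] for c, col in zip(coeffs, cols)) for i in range(rows)]

a_cols = [(3, 2), (2, 3), (2, -2)]   # columns of a
b_cols = [(1, 1), (2, 3), (3, 4)]    # columns of b

print(combine(a_cols, (-4, 4, 2)))   # [0, 0] -> columns of a are dependent
print(combine(b_cols, (1, 1, -1)))   # [0, 0] -> columns of b are dependent
```

Finding such coefficients shows dependence, but failing to find them by hand proves nothing, which is exactly why a systematic method is needed.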
I would love some guidance and explanations (perhaps applying it to a new example or the provided examples). Thank you for your help -- this will be extremely useful for my learning/understanding!
As you know, the row rank of a matrix is always equal to its column rank.
If, for example, you have fewer rows than columns, your columns cannot be independent. This is because the column rank, being equal to the row rank, cannot exceed the number of rows, which is less than the number of columns.
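This rank argument can be checked mechanically: row-reduce the matrix and count the nonzero rows. Below is a minimal sketch (my own code, not from the answer) using exact `Fraction` arithmetic; `matrix_rank` is a hypothetical helper name:

```python
from fractions import Fraction

def matrix_rank(matrix):
    """Row-reduce a copy of the matrix and count its nonzero rows (= rank)."""
    m = [[Fraction(x) for x in row] for row in matrix]
    n_rows, n_cols = len(m), len(m[0])
    rank = 0
    col = 0
    while rank < n_rows and col < n_cols:
        # Find a pivot in the current column, at or below row `rank`.
        pivot = next((r for r in range(rank, n_rows) if m[r][col] != 0), None)
        if pivot is None:
            col += 1          # no pivot here; move to the next column
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        # Eliminate the entries below the pivot.
        for r in range(rank + 1, n_rows):
            factor = m[r][col] / m[rank][col]
            m[r] = [x - factor * y for x, y in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return rank

a = [[3, 2, 2], [2, 3, -2]]
print(matrix_rank(a))   # 2: the rank is at most 2 (the number of rows),
                        # but a has 3 columns, so its columns are dependent.
```

For both example matrices the rank is 2, which is less than the 3 columns, so dependence follows without any guessing. The same test works for square matrices: the columns are independent exactly when the rank equals the number of columns.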