Why must a linearly independent set of vectors not contain the zero vector?


Why is it necessary for a linearly independent set of vectors not to contain the zero vector? I am asking from the perspective of the definition, i.e. why do we define linear independence in this way?


A big part of what makes the definition of "linearly independent" so useful is that it gives a robust notion of "basis" and "dimension": a basis is a linearly independent set which spans the entire vector space, and any two bases for a vector space have the same number of elements, which we call the dimension of the space. Any two vector spaces of the same dimension are isomorphic. These basic facts are fantastically powerful, and (when paired with the fact that any linearly independent set can be extended to a basis) are arguably the main reason that linear algebra is such a central part of mathematics.
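As a minimal numerical sketch of these facts (using numpy, with $\mathbb{R}^2$ and its standard basis as an assumed example): a basis is independent (the matrix of basis vectors has full rank) and spans the space (every vector has a unique coordinate vector with respect to it).

```python
import numpy as np

# The standard basis of R^2, one vector per row.
e = np.eye(2)

# Independence: as many linearly independent rows as there are vectors.
assert np.linalg.matrix_rank(e) == 2

# Spanning: an arbitrary vector v has unique coordinates in this basis,
# found by solving the linear system  coeffs @ e = v.
v = np.array([3.0, -1.5])
coeffs = np.linalg.solve(e.T, v)
assert np.allclose(coeffs @ e, v)
print(coeffs)  # [ 3.  -1.5]
```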

If you allow the zero vector to belong to a linearly independent set, all of this breaks down. You could take any basis and add the zero vector to it: the enlarged set still spans the space, so it would again count as a basis, and a vector space could then have bases of different sizes. Furthermore, it would no longer be true that the size of a basis determines the vector space up to isomorphism (for instance, if $K$ is your scalar field, then both $K$ and $K^2$ would have a basis of size $2$, namely $\{0,1\}$ and $\{(0,1),(1,0)\}$).
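The breakdown is easy to see numerically. Here is a hypothetical sketch in $\mathbb{R}^2$ (the concrete case standing in for $K^2$ above): appending the zero vector to the standard basis leaves the rank, and hence the span, unchanged, so the enlarged set of three vectors cannot be independent under the standard definition.

```python
import numpy as np

# The standard basis of R^2, plus the zero vector, one vector per row.
basis = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
with_zero = np.vstack([basis, [0.0, 0.0]])

# The honest basis: rank equals the number of vectors (independent).
print(np.linalg.matrix_rank(basis))      # 2

# With the zero vector appended: still rank 2, but now 3 vectors,
# so the set spans the same space without being independent.
print(np.linalg.matrix_rank(with_zero))  # 2
```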

In a certain sense, allowing the zero vector to be in a linearly independent set is much like considering the integer $1$ to be prime: the purpose of primes is to be able to factor other numbers into them, but if you allow $1$ to be prime these factorizations are no longer unique, because you can add in as many copies of $1$ as you like. Similarly, any vector space can be "factored" into a basis (or more abstractly, split as a direct sum of simple vector spaces), but the number of terms in this factorization would not be unique if zero could be a basis element.
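The analogy can be sketched in a few lines: if $1$ counted as a prime, the same integer would admit "prime" factorizations of different lengths, just as a basis containing zero would let the same space have "bases" of different sizes.

```python
from math import prod

# If 1 were prime, these would all be valid "prime" factorizations of 6,
# with different numbers of factors -- uniqueness is lost.
factorizations = [[2, 3], [1, 2, 3], [1, 1, 2, 3]]
assert all(prod(f) == 6 for f in factorizations)
print(sorted(len(f) for f in factorizations))  # [2, 3, 4]
```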