What Things can be Linearly Independent?


Is linear independence confined strictly to lists of vectors, or does it extend to vector spaces, subspaces, etc.?

If it isn't just lists of vectors, what are the most common things that can be linearly independent?


BEST ANSWER

Linear independence is a property of certain collections of vectors in a vector space. It guarantees that when we represent a vector as a linear combination of those vectors, the weights are unique.$^\dagger$

Say that the set $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ is linearly independent. The definition you've likely encountered is that any time the $\mathbf{0}$ vector is expressed as a linear combination of these vectors, the weights are all zero: $$ c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n = \mathbf{0} \quad\implies\quad c_1 = \cdots = c_n = 0. \tag{1} $$
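Definition (1) can be checked numerically: if we stack the vectors as the columns of a matrix $A$, then $A\mathbf{c} = \mathbf{0}$ forces $\mathbf{c} = \mathbf{0}$ exactly when the rank of $A$ equals its number of columns. A minimal sketch, assuming numpy (not part of the original answer):

```python
import numpy as np

def is_linearly_independent(vectors):
    """Return True if the given vectors (over R) are linearly independent.

    Stack the vectors as columns of a matrix A; by definition (1),
    independence means A @ c = 0 forces c = 0, which holds exactly
    when rank(A) equals the number of columns.
    """
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

# The standard basis vectors of R^3 are independent...
e1, e2, e3 = np.eye(3)
print(is_linearly_independent([e1, e2, e3]))  # True

# ...but appending e1 + e2 creates a dependence.
print(is_linearly_independent([e1, e2, e1 + e2]))  # False
```

Numerical rank is computed via the singular values, so this test is robust for small examples but, like any floating-point rank computation, depends on a tolerance for nearly dependent vectors.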


Now if $\mathbf{w}$ is any vector in the span of $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$, there exist weights $a_1, \dots, a_n$ such that $$ \mathbf{w} = a_1\mathbf{v}_1 + \cdots + a_n\mathbf{v}_n. $$ Suppose we encounter another linear combination yielding $\mathbf{w}$ as well, say $$ \mathbf{w} = b_1\mathbf{v}_1 + \cdots + b_n\mathbf{v}_n. $$ The following argument shows that, because of the linear independence of $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$, this second set of scalars is actually the same as the first. To wit, \begin{align} (a_1 - b_1)\mathbf{v}_1 + \cdots + (a_n - b_n)\mathbf{v}_n &= (a_1\mathbf{v}_1 - b_1\mathbf{v}_1) + \cdots + (a_n\mathbf{v}_n - b_n\mathbf{v}_n) \\ &= (a_1\mathbf{v}_1 + \cdots + a_n\mathbf{v}_n) - (b_1\mathbf{v}_1 + \cdots + b_n\mathbf{v}_n) \\ &= \mathbf{w} - \mathbf{w} \\ &= \mathbf{0}, \end{align} so the weights $$ \left\{ \begin{aligned} c_1 &= a_1 - b_1 \\ &\;\;\vdots \\ c_n &= a_n - b_n \end{aligned} \right. $$ must all be zero by definition (1), hence $$ \left\{ \begin{aligned} a_1 &= b_1 \\ &\;\;\vdots \\ a_n &= b_n. \end{aligned} \right. $$
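The uniqueness argument above can be observed concretely: build $\mathbf{w}$ from known weights, then recover the weights by solving a linear system. A sketch with numpy and an assumed pair of independent vectors in $\mathbb{R}^2$:

```python
import numpy as np

# Two linearly independent vectors in R^2 (example data, chosen arbitrarily).
v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, 1.0])
A = np.column_stack([v1, v2])

# Build w from known weights a = (2, -1), so w = 2*v1 - 1*v2.
a = np.array([2.0, -1.0])
w = A @ a

# Recover weights by solving A b = w; because the columns of A are
# independent, A is invertible here and the solution is unique, so b
# must agree with the original weights a.
b = np.linalg.solve(A, w)
print(np.allclose(a, b))  # True
```

This is exactly the statement that the coordinates of $\mathbf{w}$ relative to an independent set are unique; any other procedure for producing weights must return the same numbers.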


So far, this discussion of linear independence takes place in the familiar domain of a vector space $V$ over a field $\mathbb{k}$. If you're just starting to learn linear algebra, you're likely assuming that the field of scalars is $\mathbb{k} = \mathbb{R}$, the real numbers. But it doesn't have to be: $\mathbb{C}$, $\mathbb{Q}$, and $\mathbb{Z}_p$ are all regularly used in various contexts. One of the important properties of a field that make all the tools of linear algebra work is that every nonzero element $c$ of the field has a (multiplicative) inverse $c^{-1}$.
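The field $\mathbb{Z}_p$ makes this concrete: the same definition (1) applies verbatim, and independence can be tested by Gaussian elimination mod $p$, which works precisely because every nonzero scalar mod a prime has an inverse (in Python, `pow(c, -1, p)`). A self-contained sketch:

```python
def rank_mod_p(rows, p):
    """Rank of a matrix over the field Z_p (p prime), via Gaussian elimination.

    The elimination relies on exactly the field property discussed above:
    every nonzero scalar mod a prime has an inverse, pow(c, -1, p).
    """
    M = [[x % p for x in row] for row in rows]
    rank, ncols = 0, len(M[0])
    for col in range(ncols):
        # Find a row at or below `rank` with a nonzero entry in this column.
        pivot = next((r for r in range(rank, len(M)) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][col], -1, p)           # scale pivot row to leading 1
        M[rank] = [(x * inv) % p for x in M[rank]]
        for r in range(len(M)):                   # clear the column elsewhere
            if r != rank and M[r][col] != 0:
                f = M[r][col]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

# Over Z_2 the vectors (1,1,0), (0,1,1), (1,0,1) sum to (0,0,0), so they
# are dependent mod 2, even though over R (or Z_5) they are independent.
vectors = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
print(rank_mod_p(vectors, 2))  # 2  -> dependent over Z_2
print(rank_mod_p(vectors, 5))  # 3  -> independent over Z_5
```

The example shows that independence genuinely depends on the field of scalars, not just on the vectors themselves.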

But there's a more general context in which linear independence of a set of vectors still makes sense. This is a module $M$ over a ring $R$, which, loosely speaking, is a vector space in which the assumption that every nonzero scalar in the ground ring has a reciprocal is relaxed. A field is a special type of ring, so every vector space is a module, but a ground field is not necessary in order to talk about linear independence. Notice that we didn't need reciprocals anywhere in the discussion of unique representation above, just some distributive properties and cancellation: if $a_i - b_i = 0$ then $a_i = b_i$. So any algebraic gadget where such properties hold admits a meaningful definition of linear independence.
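A small brute-force check illustrates how modules behave differently: in the $\mathbb{Z}_6$-module $\mathbb{Z}_6$ (where 6 is not prime, so $\mathbb{Z}_6$ is a ring but not a field), even a single nonzero element can fail to be independent, because a nonzero, non-invertible scalar can annihilate it. A sketch with a hypothetical helper:

```python
def independent_singleton_mod_n(v, n):
    """Is the one-element list [v] linearly independent in the
    Z_n-module Z_n?  I.e., does c*v = 0 (mod n) force c = 0 (mod n)?
    Checked by brute force over all nonzero scalars."""
    return all((c * v) % n != 0 for c in range(1, n))

# Over the field Z_5, any single nonzero element is independent...
print(independent_singleton_mod_n(2, 5))  # True

# ...but in the Z_6-module Z_6 we have 3 * 2 = 6 = 0 with scalar 3 != 0,
# so [2] is linearly dependent even though 2 is not the zero element.
print(independent_singleton_mod_n(2, 6))  # False
```

Over a field this cannot happen: $c\mathbf{v} = \mathbf{0}$ with $c \neq 0$ gives $\mathbf{v} = c^{-1}\mathbf{0} = \mathbf{0}$, and that step is exactly where the missing reciprocals change the picture.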

One of the advantages of abstraction is that we can apply definitions and theorems wherever they make sense, and we can draw analogous conclusions.


$^\dagger$ The complementary property of a set of vectors is that it spans a certain space, which guarantees that such a representation of a vector as a linear combination is always possible, i.e., that the weights exist. If we know that the weights both exist and are unique, so that there is one and only one set of weights ("coordinates") representing any given vector, then we call such a set a basis.


This is more a collection of remarks than an answer, but is both long for a comment and posted in the spirit of not answering in the comments.

If (or since) we're splitting hairs: In my experience with introductory linear algebra books, the property of linear independence generally applies to sets of vectors (i.e., to subsets of a vector space), not lists (i.e., mappings from an index set taking values in a vector space). One can meaningfully (and perhaps usefully) define linear independence for lists, however.

In a similar vein, elementary books assume sets of vectors are ordered even when not explicitly stated. That is, "sets" of vectors are in some respects "listy." For example, one speaks of "the standard matrix of a linear transformation" which implicitly comes with an ordering of rows and columns; one has "the orientation determined by a basis" which again implicitly assumes an ordering. My book explicitly refers throughout to ordered bases, though in my experience doing so is not standard.

One practical difference between using sets and lists is that a single vector "can only appear once" in a set, but could appear more than once in a list. For example, if $v$ is non-zero, then the set $\{v, v\} = \{v\}$ is linearly independent, but the list $(v, v)$ would be linearly dependent with the "obvious" definition.
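Python's built-in containers mirror this distinction directly: a `set` collapses duplicates while a list keeps them, so the same vector written twice gives an independent set but a dependent list. A sketch, assuming numpy for the rank check:

```python
import numpy as np

v = (1.0, 2.0)   # a nonzero vector, stored as a tuple so it is hashable

as_set = {v, v}  # duplicates collapse: this set is just {v}
as_list = [v, v] # duplicates kept: two copies of the same vector

def independent(vectors):
    """Vectors are independent iff, stacked as columns, the matrix
    has rank equal to the number of columns."""
    A = np.column_stack([np.array(u) for u in vectors])
    return np.linalg.matrix_rank(A) == A.shape[1]

print(len(as_set), independent(list(as_set)))  # 1 True  -> {v, v} = {v} is independent
print(len(as_list), independent(as_list))      # 2 False -> the list (v, v) is dependent
```

This is the "obvious" definition for lists in action: the repeated column forces a rank deficiency, matching the dependence relation $1 \cdot v + (-1) \cdot v = 0$.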

If it matters, a set of vectors containing the zero vector is linearly dependent (easy exercise), so in practice one never speaks of linear independence of a vector space or subspace.