I'm going through Linear Algebra by Friedberg et al. and I'm having trouble with the buildup to the main ideas regarding span, linear independence, basis, and dimension. I'll detail what I've taken away so far about each concept and where I get confused. (Note: I'm mainly focusing on finite-dimensional vector spaces.)
A set spans a vector space $V$ if all linear combinations (LCs) of the set result in the vector space. There are many possible spanning sets of a vector space.
Spanning sets with fewer elements can express other vectors in $V$ more efficiently (computation-wise), since fewer vectors are used in forming LCs. Finding such sets motivates the notion of linear independence.
Question A: Sometimes the book refers to spanning sets that have fewer vectors in them as "smaller" spanning sets, and other times "minimal spanning sets". Are these equivalent?
The book then covers linear independence, which makes sense to me.
After this the book relates linear independence and the span of a set: adding/removing dependent vectors does not affect the span (since they're already in the span), whereas adding/removing independent vectors does affect the span. There's a theorem which states this more formally, using $S \cup \{v\}$, where $S$ is a linearly independent set and $v$ is a vector not in $S$. Maybe I'm missing something here conceptually from the formal statement in the book.
Question B: At this point can we conclude that the smallest spanning set is a linearly independent set?
- A basis is a linearly independent set which spans $V$. A basis can express each vector in $V$ with a unique LC. A finite spanning set can be reduced to a basis for $V$.
Question C: So far not much has been said about the sizes of spanning sets, linearly independent sets, and bases of $V$, which is why I get confused by the last sentence of the previous point. We haven't definitively stated how big/small each of the three kinds of sets can be, so how can we make this remark?
Replacement Theorem: my understanding of this is as follows - the size of a linearly independent subset of $V$ is always $\leq$ the size of a spanning set of $V$, and the linearly independent subset can be made to span $V$ if elements from the spanning set are added to it.
Some results from the theorem are: every basis has the same number of elements (this common size is the dimension), and the maximum number of linearly independent vectors in $V$ is $\dim(V)$.
Thanks in advance, I'm a current engineering university student! I have taken a couple of classes in linear algebra before: one was computational, the other was definitely more theoretical but probably not at the level required for mathematics majors (it was a class for engineering students). Out of curiosity and to attain a deeper understanding I decided to go through this book.
Not quite. If you remove some vectors from a spanning set, in such a way that you don't damage the span, you get a smaller spanning set. Such a set is not necessarily minimal; it's possible that you could remove even more vectors without damaging the span. If you keep removing vectors without damaging the span, eventually you'll get a spanning subset with the property that removing any vector from it will cause the subset to no longer be spanning. This is what it means to be minimal: there's guaranteed to be no more reductions possible.
As an example:
This is a spanning set of $\Bbb{R}^2$: $\{(1, 0), (1, 1), (3, 2), (2, -5)\}$.
This is an example of a spanning set that is "smaller" than the above: $\{(1, 0), (1, 1), (2, -5)\}$.
This is an example of a minimal spanning set (contained in the above spanning sets): $\{(1, 0), (1, 1)\}$.
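These claims are easy to check mechanically: a set spans $\Bbb{R}^2$ exactly when the matrix whose rows are its vectors has rank $2$. Here is a small pure-Python sketch (my own, not from the book) that computes rank by exact Gaussian elimination and confirms that all three sets above span $\Bbb{R}^2$, while neither singleton inside the minimal set does:

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors (tuples), via exact Gaussian elimination."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0  # number of pivots found so far
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

big     = [(1, 0), (1, 1), (3, 2), (2, -5)]
smaller = [(1, 0), (1, 1), (2, -5)]
minimal = [(1, 0), (1, 1)]

# All three sets span R^2: each has rank 2.
print(rank(big), rank(smaller), rank(minimal))  # 2 2 2

# The minimal set cannot be shrunk further: either vector alone has rank 1.
print(rank([(1, 0)]), rank([(1, 1)]))           # 1 1
```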
I should just correct you there: $v$ is not just a vector not in $S$. We actually need $v \notin \operatorname{span} S$. Note that if no such vector can be added, then $S$ is spanning (there is no vector in the space that is not already in the span).
Your understanding is correct otherwise: if a vector is dependent on the other vectors in the set $S$ (i.e. if $v \in \operatorname{span} S$), then adding $v$ to $S$ will not increase the span, and removing $v$ from $S$ will not decrease the span. Conversely, if $v$ is not a linear combination of the other vectors in $S$, then adding/removing $v$ will strictly increase/decrease the span.
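You can see this concretely by watching the rank of a matrix of row vectors (the rank equals the dimension of the span). In this sketch of mine, adding a dependent vector to $S = \{(1,0)\}$ leaves the rank alone, while adding an independent one raises it:

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors (tuples), via exact Gaussian elimination."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

S = [(1, 0)]                # spans the x-axis in R^2
print(rank(S))              # 1
print(rank(S + [(2, 0)]))   # 1: (2, 0) is already in span S, nothing gained
print(rank(S + [(1, 1)]))   # 2: (1, 1) lies outside span S, span grows
```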
And yes, using this result, we can conclude that minimal spanning sets are linearly independent. If $S$ were linearly dependent, then one of the vectors in $S$ would lie in the span of the other vectors in $S$. Removing this vector does not decrease the span (i.e. the set with this vector removed is still spanning), contradicting minimality.
We don't need any solid numbers for the sizes of these sets. The point is, if we start with a finite spanning set, and start removing redundant elements (i.e. remove $v$ from $S$ such that $v \in \operatorname{span}(S \setminus \{v\})$), the set is going to get smaller and smaller. Eventually, we need to stop removing elements, since we only have a finite number of them!
If we only remove redundant elements, then we don't damage the span, but at some point we have to stop. At that point, there will be no redundant vectors left, and the resulting subset will be minimal and hence linearly independent. So, using this process of greedily removing redundant vectors, we can pare down any spanning set to a basis.
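The greedy pare-down procedure can be written out directly. This is my own sketch (again using a hand-rolled exact rank function): a vector is redundant exactly when deleting it leaves the rank, and hence the span, unchanged:

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors (tuples), via exact Gaussian elimination."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def pare_to_basis(spanning):
    """Greedily drop any vector lying in the span of the others."""
    basis = list(spanning)
    i = 0
    while i < len(basis):
        rest = basis[:i] + basis[i + 1:]
        if rank(rest) == rank(basis):  # basis[i] is redundant
            basis = rest               # drop it, re-examine position i
        else:
            i += 1                     # basis[i] is essential, keep it
    return basis

spanning = [(1, 0), (1, 1), (3, 2), (2, -5)]
basis = pare_to_basis(spanning)
print(basis, len(basis))  # two vectors remain: a basis of R^2
```

Which two vectors survive depends on the order of removal, but the final count is always $2 = \dim \Bbb{R}^2$, in line with the replacement theorem.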