By definition, a finite-dimensional vector space is one spanned by a finite set of vectors. Let V be such a vector space and let S be a finite set that spans V. The text I am following states the following theorem:
Theorem 1: Any minimal spanning set of V is a basis of V.
Since we have a spanning set to begin with, we can keep removing vectors that lie in the span of the remaining ones until we are left with a minimal spanning set, which by Theorem 1 is then a basis for that vector space.
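For concreteness, this removal process can be sketched numerically (a hypothetical illustration, not from the text, using numpy's rank computation in $\mathbb{R}^2$ to test whether a vector is redundant):

```python
import numpy as np

# A spanning set for R^2 with one redundant vector (hypothetical example).
S = [np.array([1.0, 0.0]),
     np.array([0.0, 1.0]),
     np.array([1.0, 1.0])]   # lies in the span of the first two

def prune_to_minimal(vectors):
    """Remove a vector whenever dropping it leaves the span unchanged
    (i.e. the rank of the remaining vectors is the same); repeat until
    no vector is redundant.  The result is a minimal spanning set."""
    vecs = list(vectors)
    changed = True
    while changed:
        changed = False
        for i in range(len(vecs)):
            rest = vecs[:i] + vecs[i + 1:]
            if rest and (np.linalg.matrix_rank(np.array(rest))
                         == np.linalg.matrix_rank(np.array(vecs))):
                vecs.pop(i)       # vecs[i] was in the span of the others
                changed = True
                break
    return vecs

basis = prune_to_minimal(S)
print(len(basis))  # 2: a minimal spanning set, hence a basis of R^2
```

Note that which two vectors survive depends on the order of removal; any minimal spanning set reached this way is a basis by Theorem 1.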
However, the text I am following has the following theorem.
Theorem 2: Every linearly independent list of vectors in a finite-dimensional vector space can be extended to a basis of the vector space.
Then the text mentions the following corollary:
Corollary to Theorem 2: Every finite-dimensional vector space has a finite basis.
No proof is given for the corollary. Is it really that straightforward? Does the argument go like this: the empty list, which by definition is a basis of the zero subspace {0}, can be extended to a basis of V, and hence V has a basis?
I feel that something is missing.
All we know up to this point is that if a basis exists, then it is a minimal spanning set and a maximal linearly independent set, and that any two bases must have the same number of elements (which is where the motivation to define dimension starts to emerge). We have not yet shown that a finite-dimensional vector space has a basis, and hence we cannot assume that V has a finite basis.
So my question is: how can we prove Theorem 2 without referring to any finite basis of V?
Line of proof for Theorem 2 given in the text: Let W be a subspace of V with basis $\{w_1,w_2,...,w_k\}$. Choose a vector $v_{k+1}$ from $V - W$. Then the list $\{w_1,w_2,...,w_k,v_{k+1}\}$ is linearly independent. Let $W_1$ be the span of this new list. Then choose any vector $v_{k+2}$ from $V - W_1$ and append it to the linearly independent list to get $\{w_1,w_2,...,w_k,v_{k+1},v_{k+2}\}$. We can keep going like this, but can the process go on forever?
This is where the text simply states that the process must terminate because "the vector space is finite dimensional." To me, this is the statement that does not make sense. All we know is:
- There is a finite set of vectors, say S, which spans V, and we know that
- There is a subspace W of V with some basis, say $\{w_1,w_2,...,w_k\}$.
How can we use just the above facts (and perhaps some of the aforementioned theorems about bases) to prove Theorem 2?
I would greatly appreciate feedback on the above query.
You can do it using the following theorem:
Theorem: In a finite-dimensional vector space, the length of every linearly independent list of vectors is less than or equal to the length of every spanning list of vectors.
I culled this formulation of the theorem from this question, where it is quoted as Theorem 2.23 from Axler's Linear Algebra Done Right, but I think I remember seeing something very similar in Beezer's A First Course in Linear Algebra -- it should be a standard theorem. The proof (see the linked question for details) doesn't rely on any concept of dimension or basis.
Once you have that theorem, the proof of Theorem 2 proceeds as follows. We construct a linearly independent list of vectors by the process the text describes. This process must terminate because there is a finite list of spanning vectors (fact 1), and a linearly independent list cannot be longer than that spanning list (by the theorem I quoted). Thus the list reaches some finite length at which no further vector can be added, meaning no vector of V lies outside its span. The list therefore spans V, and being linearly independent, it is a basis.
The proof of the corollary is as you surmised. Start with the empty list, and extend it to a basis.
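The extension process and its termination can be illustrated numerically (a hypothetical sketch, not from any text, in $\mathbb{R}^3$ using numpy; for simplicity the candidate vectors are drawn from the finite spanning list rather than from all of V, which suffices because any vector outside the current span forces some spanning vector to be outside it too):

```python
import numpy as np

def extend_to_basis(independent, spanning):
    """Greedily extend a linearly independent list to a basis:
    append a vector from the finite spanning list whenever it
    enlarges the span (detected by a rank increase).  The loop
    runs at most len(spanning) times, so it must terminate."""
    basis = list(independent)
    rank = np.linalg.matrix_rank(np.array(basis)) if basis else 0
    for v in spanning:
        new_rank = np.linalg.matrix_rank(np.array(basis + [v]))
        if new_rank > rank:          # v is not in the span of `basis`
            basis.append(v)
            rank = new_rank
    return basis

# Extend a one-element linearly independent list to a basis of R^3,
# drawing candidates from the standard spanning set.
e = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]
print(len(extend_to_basis([np.array([1.0, 1.0, 0.0])], e)))  # 3
```

Starting from the empty list, `extend_to_basis([], e)` produces a basis outright, which mirrors the corollary's argument.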