I’m reading Linear Algebra Done Right by Sheldon Axler. Theorem 2.23 states that the length of a linearly independent list is always less than or equal to the length of a spanning list in a finite-dimensional vector space. To prove this, Axler uses a process of inductively adding the vectors of the linearly independent list to the spanning list and, at each step, deleting one vector of the original spanning list, so you’re left with a list that contains all vectors of the independent list and possibly some vectors of the spanning list you started with.
Does anyone know of an alternative way of proving this fact? This seemed to me like an unintuitive process for proving something so simple.
Full description of proof here: Length of linearly independent list is less than or equal to length of spanning list
Sure: suppose $\mathbf{l}_1, \ldots, \mathbf{l}_n$ is any list of vectors and that $\mathbf{s}_1, \ldots, \mathbf{s}_m$ spans. Each $\mathbf{l}_j$ can be written as a linear combination of the $\mathbf{s}_i$, so for each $j$ we may write $\mathbf{l}_j = \sum_i a_{ij} \mathbf{s}_i$ for scalars $a_{ij}$. The matrix $A = (a_{ij})$ is $m \times n$, and if $m < n$ there are more columns than rows ("more variables than equations"), so there must be a nonzero solution $\mathbf{v}$ to $A \mathbf{x} = \mathbf{0}$. That means $\sum_j a_{ij} v_j = 0$ for all $i$, so
$$\sum_j v_j \mathbf{l}_j = \sum_j v_j \sum_{i} a_{ij}\mathbf{s}_i = \sum_i \left( \sum_j a_{ij} v_j \right) \mathbf{s}_i = \sum_i 0\cdot \mathbf{s}_i = \mathbf{0},$$
and so the list $\mathbf{l}_1, \ldots, \mathbf{l}_n$ is linearly dependent.
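As a sanity check, the argument is easy to illustrate numerically. The following sketch (my own example in NumPy; the particular vectors $\mathbf{s}_i$ and $\mathbf{l}_j$ are made up) builds the $m \times n$ coefficient matrix $A$, pulls a nonzero null vector out of its SVD, and verifies that it gives a dependence relation among the $\mathbf{l}_j$:

```python
import numpy as np

# Made-up example: m = 2 spanning vectors in R^2, n = 3 vectors l_j.
s = np.array([[1.0, 0.0], [0.0, 1.0]])               # s_1, s_2 (rows)
l = np.array([[2.0, 1.0], [1.0, 3.0], [4.0, -1.0]])  # l_1, l_2, l_3 (rows)

# Coefficients a_{ij} with l_j = sum_i a_{ij} s_i: column j of A holds the
# coordinates of l_j in the spanning list.  Since s here is the standard
# basis, A is just l transposed, an m x n = 2 x 3 matrix.
A = l.T

# More columns than rows => nontrivial null space.  The right singular
# vector for the smallest singular value solves A v = 0.
_, _, Vt = np.linalg.svd(A)
v = Vt[-1]

print(np.allclose(A @ v, 0))   # v solves the homogeneous system
print(np.allclose(sum(vj * lj for vj, lj in zip(v, l)), 0))  # dependence among the l_j
```

Both checks print `True`: the null vector of $A$ really is a nontrivial linear dependence among $\mathbf{l}_1, \mathbf{l}_2, \mathbf{l}_3$.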
Of course, you must somehow justify the statement that a homogeneous linear system with more variables than equations has a nonzero solution, or equivalently, that any $n+1$ vectors in $\mathbb{F}^n$ are linearly dependent. "Classically," if you want to avoid Axler's swapping-in method, you might do this by row reduction: show that row operations don't change the solution set of a linear system, then show that a system of this shape can be transformed by row operations into a form where it "obviously" has nonzero solutions. But really, I don't agree that this result is "so simple": it's a non-trivial theorem, so you should expect its proof to contain non-trivial ideas.
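The row-reduction route can be made completely concrete. Here is a sketch (my own illustration, not anything from Axler) that finds a nonzero solution of a homogeneous system with more unknowns than equations: reduce the matrix, note that with $n > m$ some column must be pivot-free, set that free variable to $1$, and read the remaining values off the reduced rows. Exact rational arithmetic via `fractions.Fraction` avoids floating-point noise.

```python
from fractions import Fraction

def nonzero_null_vector(A):
    """Return a nonzero x with A x = 0, for an m x n matrix A with n > m,
    via exact Gauss-Jordan elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in A]
    m, n = len(A), len(A[0])
    pivot_cols, r = [], 0
    for c in range(n):
        # Find a pivot in column c at or below row r.
        pr = next((i for i in range(r, m) if A[i][c] != 0), None)
        if pr is None:
            continue  # pivot-free column: a future free variable
        A[r], A[pr] = A[pr], A[r]
        A[r] = [x / A[r][c] for x in A[r]]          # scale pivot row
        for i in range(m):                          # clear the column
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivot_cols.append(c)
        r += 1
    # n > m guarantees at least one column without a pivot.
    free = next(c for c in range(n) if c not in pivot_cols)
    x = [Fraction(0)] * n
    x[free] = Fraction(1)
    for row_idx, c in enumerate(pivot_cols):
        x[c] = -A[row_idx][free]
    return x

# 2 equations, 3 unknowns: a nonzero solution must exist.
print(nonzero_null_vector([[2, 1, 4], [1, 3, -1]]))
```

The returned vector is nonzero by construction (the free variable is $1$), which is exactly the existence statement the matrix proof above needs.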