Consider a set $S$ of linearly independent vectors in $\mathbb{R}^n$. I want to show that not only can I extend $S$ to a basis of $\mathbb{R}^n$, but I can always do so by adding standard unit vectors of the form $(0,\ldots,0,1,0,\ldots,0)$.
Intuitively, this property seems clear in two or three dimensions. But how can I prove it in general?
Something more general is true: let $S$ be a linearly independent subset of a vector space $V$ (say of finite dimension over a field $k$), and let $B$ be a basis of $V$. Then there exists a subset $A\subset B$ such that $S\cup A$ is a basis of $V$.
We do induction on $\dim V-|S|$. When this is $0$, $S$ is already a basis, hence we may choose $A=\emptyset$.
As for the inductive step, since $\dim V-|S|>0$, the linear span of $S$ is not the entire space $V$. In particular, there exists $e\in B$ such that $e$ is not a linear combination of vectors from $S$ (if every element of $B$ were in the span of $S$, then $S$ would span $V$), so $S\cup\{e\}$ is linearly independent. Now by the induction hypothesis, we can choose $A'\subset B$ such that $S\cup\{e\}\cup A'$ is a basis of $V$, hence we may let $A:=A'\cup\{e\}$ to finish the business.
With the help of Zorn's lemma, we can actually eliminate the finite-dimensionality assumption in the proposition, but the proof becomes less constructive and instructive.
This also produces an algorithm: given $S$ and $B=\{e_1, \cdots, e_n\}$, for each $e_i$, test whether it lies in the linear span of the current $S$; if so, skip it, otherwise update $S:=S\cup\{e_i\}$. At the end, $S$ is a basis of $V$. Taking $B$ to be the standard basis of $\mathbb{R}^n$ answers your original question.
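Here is a minimal sketch of that algorithm for $V=\mathbb{R}^n$, assuming numpy is available; the span-membership test is done via matrix rank (adding $e_i$ enlarges the span iff it increases the rank). The function name `extend_to_basis` is my own choice for illustration.

```python
import numpy as np

def extend_to_basis(S, B):
    """Extend the linearly independent list S by vectors from B
    until it spans R^n.  S and B are lists of 1-D numpy arrays."""
    S = list(S)
    for e in B:
        # e lies in span(S) iff appending it leaves the rank unchanged
        rank_S = np.linalg.matrix_rank(np.array(S)) if S else 0
        if np.linalg.matrix_rank(np.array(S + [e])) > rank_S:
            S.append(e)
    return S

# Example: S = {(1,1,0)}, B = standard basis of R^3.
S = [np.array([1.0, 1.0, 0.0])]
B = [np.eye(3)[i] for i in range(3)]
basis = extend_to_basis(S, B)
# The algorithm keeps e_1 and e_3 but skips e_2 = (1,1,0) - (1,0,0).
```

Since each accepted vector strictly increases the rank and $B$ spans $V$, the loop terminates with $|S|=\dim V$ and $S$ a basis.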