This proof was presented to me:
The set of vectors $B = \{v_1,\ldots,v_n\}$ is linearly independent (it is a basis for $V$); the only solution of the equation $$\lambda_1 v_1+\ldots+\lambda_n v_n=0, \quad \lambda_i \in K$$
is $$\lambda_1=\ldots=\lambda_n=0$$ Proof by contradiction: Suppose there is a linearly dependent subset of $m$ vectors, with $m<n$; then there are $\alpha_1,\ldots,\alpha_n$ not all zero, such that: $$\alpha_1 v_1+\ldots+\alpha_n v_n=0, \quad \alpha_i \in K$$ It follows that we could find a solution for $\lambda_{i_k}=\alpha_k$, for $1<k<n$, and for the rest of the coefficients, $\lambda_j=0$. Then, if a linearly dependent subset of $B$ exists, $B$ cannot be linearly independent. Therefore, any non-empty subset of $B$ is linearly independent.
- What's the meaning of $\lambda_{i_k}=\alpha_k$ and $\lambda_j$ in the proof?
- Is it necessary to justify why $B$ is linearly independent?
- Is this a valid proof?
The proof, as reported, is not well written, albeit essentially correct.
First, a set $\{v_1,v_2,\dots,v_n\}$ is linearly independent if and only if the only solution to $\lambda_1v_1+\lambda_2v_2+\dots+\lambda_nv_n=0$ is $\lambda_1=\lambda_2=\dots=\lambda_n=0$.
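For instance (a standard illustrative example, not taken from the proof above): in $\mathbb{R}^2$ the set $\{(1,0),(0,1)\}$ is linearly independent, because $\lambda_1(1,0)+\lambda_2(0,1)=(\lambda_1,\lambda_2)=(0,0)$ forces $\lambda_1=\lambda_2=0$; by contrast, $\{(1,0),(2,0)\}$ is not, since $2(1,0)+(-1)(2,0)=(0,0)$ with nonzero coefficients.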
Choose a subset $\{v_{i_1},v_{i_2},\dots,v_{i_k}\}$ of $B$, where $1\le i_1<i_2<\dots<i_k\le n$, and suppose $$ \alpha_1v_{i_1}+\alpha_2v_{i_2}+\dots+\alpha_kv_{i_k}=0 \tag{*} $$ For $1\le i\le n$, define $$ \lambda_i= \begin{cases} \alpha_j & \text{if }i=i_j\text{ for some }1\le j\le k \\[4px] 0 & \text{otherwise} \end{cases} $$ Then clearly $\lambda_1v_1+\lambda_2v_2+\dots+\lambda_nv_n=0$, which implies $\lambda_1=\lambda_2=\dots=\lambda_n=0$ and, in particular, $\alpha_1=\alpha_2=\dots=\alpha_k=0$. QED
Basically, you rewrite the linear combination (*) by inserting the “missing vectors” with zero coefficient. This doesn't change the result of the linear combination (which is zero by assumption), but allows you to apply the assumption that the full set is linearly independent.
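To make the bookkeeping concrete (a small worked instance, with indices chosen purely for illustration): take $n=3$ and the subset $\{v_1,v_3\}$, so $k=2$, $i_1=1$, $i_2=3$. If $\alpha_1 v_1+\alpha_2 v_3=0$, set $\lambda_1=\alpha_1$, $\lambda_2=0$, $\lambda_3=\alpha_2$. Then $$\lambda_1 v_1+\lambda_2 v_2+\lambda_3 v_3=\alpha_1 v_1+0\,v_2+\alpha_2 v_3=0,$$ and the linear independence of $B$ gives $\lambda_1=\lambda_2=\lambda_3=0$, hence $\alpha_1=\alpha_2=0$.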