In the book Multilinear Algebra (Werner Greub) there is the following result concerning two $\mathbb K$-vector spaces $E$ and $F$:
Lemma. Let $a_1, \ldots, a_r$ be linearly independent vectors in $E$ and $b_1, \ldots, b_r\in F$, such that
$$\sum_{j=1}^r a_j\otimes b_j=0.$$ Then $b_j=0$ for every $j=1, \ldots, r$.
In my understanding, the idea is as follows: since $\{a_1, \ldots, a_r\}$ is linearly independent, one can extend this set to a basis of $E$. In particular, considering the dual basis, one gets linear functionals $a_1^*, \ldots, a_r^*: E\rightarrow \mathbb K$ such that
$$a_i^*(a_j)=\delta_{ij}=\left\{\begin{array}{lcl} 1 & \textrm{if} &i=j\\ 0 & \textrm{if}& i\neq j \end{array}\right..$$ Then the author tells us to define $\Phi: E\times F\longrightarrow \mathbb K$ setting
$$\Phi(x, y)=\displaystyle \sum_{i=1}^r a_i^*(x)f_i(y)$$
where $f_1, \ldots, f_r: F\longrightarrow \mathbb K$ are arbitrary but fixed linear maps. Then $\Phi$ is a bilinear map and, therefore, there is a unique linear map $$\Phi_{\otimes}: E\otimes F\longrightarrow \mathbb K$$ such that $$\Phi_{\otimes}(x\otimes y)=\Phi(x, y).$$ This implies
$$0=\Phi_{\otimes}(0)=\Phi_{\otimes}\left(\sum_{j=1}^r a_j\otimes b_j\right)=\sum_{j=1}^r \Phi_{\otimes}(a_j\otimes b_j)=\sum_{j=1}^r \sum_{i=1}^r a_i^*(a_j)f_i(b_j)=\sum_{j=1}^r f_j(b_j).$$ He concludes the proof by stating that, since this holds for every choice of $f_1, \ldots, f_r$, we must have $b_j=0$ for every $j$.
My question is, how does he conclude the last statement?
Thanks.
If $b_1\ne0$, extend $\{b_1\}$ to a basis $\mathcal B$ of $F$ and let $\{g_1,\ldots,g_{\dim F}\}$ be the dual basis of $\mathcal B$. Choose $f_1=g_1$ and $f_2=\cdots=f_r=0$. Then $\sum_jf_j(b_j)=g_1(b_1)=1\ne0$, which contradicts the displayed equation. Hence $b_1=0$, and the same argument applied to each index shows that the remaining $b_j$ are zero as well.
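To see the mechanism concretely, here is a small numerical sketch (my own illustration, not from the book), assuming $E$ and $F$ are finite-dimensional real spaces. A simple tensor $a\otimes b$ is modelled as the outer product $ab^{T}$, and the dual functionals $a_i^*$ become the rows of a left inverse of the matrix whose columns are the $a_j$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linearly independent vectors a_1, a_2 in E = R^3 (columns of A),
# and arbitrary b_1, b_2 in F = R^4 (columns of B).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
B = rng.standard_normal((4, 2))

# The matrix of sum_j a_j (x) b_j, with a (x) b represented as a b^T.
T = A @ B.T

# Since the a_j are linearly independent, A has a left inverse; its rows
# play the role of the dual functionals a_1^*, a_2^*.  Here we use the
# Moore-Penrose pseudoinverse.
A_star = np.linalg.pinv(A)          # A_star @ A is the 2x2 identity

# Applying a_i^* to the first tensor factor recovers b_i, so the map
# T |-> A_star @ T always returns the b_j's.  In particular T = 0
# forces every b_j = 0, which is exactly the lemma.
recovered = A_star @ T              # rows are b_1^T, b_2^T
assert np.allclose(recovered, B.T)
```

The choice of pseudoinverse is just one convenient left inverse; any extension of $a_1^*,\ldots,a_r^*$ to $E$ works equally well, mirroring the arbitrariness of the $f_i$ in the proof.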