Prove that $\sum_{j=1}^r a_j\otimes b_j=0\Rightarrow b_j=0$ where $\{a_j\}$ is linearly independent?

In the book Multilinear Algebra (Werner Greub) there is the following result concerning two $\mathbb K$-vector spaces $E$ and $F$:

Lemma. Let $a_1, \ldots, a_r$ be linearly independent vectors in $E$ and $b_1, \ldots, b_r\in F$, such that

$$\sum_{j=1}^r a_j\otimes b_j=0.$$ Then $b_j=0$ for every $j=1, \ldots, r$.

In my understanding, the idea is as follows: since $\{a_1, \ldots, a_r\}$ is linearly independent, one can extend this set to a basis of $E$. In particular, considering the dual basis, one gets linear functionals $a_1^*, \ldots, a_r^*: E\rightarrow \mathbb K$ such that

$$a_i^*(a_j)=\delta_{ij}=\left\{\begin{array}{lcl} 1 & \textrm{if} &i=j\\ 0 & \textrm{if}& i\neq j \end{array}\right..$$ Then the author tells us to define $\Phi: E\times F\longrightarrow \mathbb K$ setting

$$\Phi(x, y)=\displaystyle \sum_{i=1}^r a_i^*(x)f_i(y)$$

where $f_1, \ldots, f_r: F\longrightarrow \mathbb K$ are arbitrary but fixed linear maps. Then $\Phi$ is a bilinear map and, therefore, there is a unique linear map $$\Phi_{\otimes}: E\otimes F\longrightarrow \mathbb K$$ such that $$\Phi_{\otimes}(x\otimes y)=\Phi(x, y).$$ This implies

$$0=\Phi_{\otimes}(0)=\Phi_{\otimes}\left(\sum_{j=1}^r a_j\otimes b_j\right)=\sum_{j=1}^r \Phi_{\otimes}(a_j\otimes b_j)=\sum_{j=1}^r \sum_{i=1}^r a_i^*(a_j)f_i(b_j)=\sum_{j=1}^r f_j(b_j).$$ He concludes the proof stating that since this holds for every $f_1, \ldots, f_r$, then $b_j=0$ for every $j$.
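The chain of equalities above can be sanity-checked in coordinates. Below is a minimal sketch, assuming $E=\mathbb K^3$ and $F=\mathbb K^2$ with $\mathbb K=\mathbb Q$, identifying $x\otimes y$ with the outer product $xy^T$; all concrete vectors and the functionals $f_i$ are hypothetical examples, not taken from the book:

```python
from fractions import Fraction as Q

def outer(x, y):
    # represent x ⊗ y as the outer product x y^T
    return [[xi * yj for yj in y] for xi in x]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def apply_form(astar, T, f):
    # evaluate the induced functional of (x, y) ↦ astar(x) f(y) on the matrix T
    return sum(astar[k] * T[k][c] * f[c]
               for k in range(len(astar)) for c in range(len(f)))

# hypothetical data: linearly independent a_1, a_2 in E = Q^3, with dual
# functionals a_1^*, a_2^* (the rows (1,0,0) and (0,1,0) work for this choice),
# arbitrary b_1, b_2 in F = Q^2, and arbitrary fixed f_1, f_2 in F^*
a1, a2 = [Q(1), Q(0), Q(1)], [Q(0), Q(1), Q(2)]
a1s, a2s = [Q(1), Q(0), Q(0)], [Q(0), Q(1), Q(0)]
b1, b2 = [Q(3), Q(-1)], [Q(2), Q(5)]
f1, f2 = [Q(1), Q(1)], [Q(2), Q(0)]

# T = a_1 ⊗ b_1 + a_2 ⊗ b_2
T = [[u + v for u, v in zip(r1, r2)]
     for r1, r2 in zip(outer(a1, b1), outer(a2, b2))]

# Phi_tensor(T) = sum over i of (a_i^* ⊗ f_i) applied to T
phi_T = apply_form(a1s, T, f1) + apply_form(a2s, T, f2)
assert phi_T == dot(f1, b1) + dot(f2, b2)  # matches sum_j f_j(b_j)
```

Were $T$ the zero matrix, $\Phi_\otimes(T)=\sum_j f_j(b_j)$ would vanish for every choice of the $f_j$, which is exactly the situation of the lemma.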

My question is, how does he conclude the last statement?

Thanks.

There are 3 best solutions below

If $b_1\ne0$, extend $\{b_1\}$ to a basis $\mathcal B$ of $F$ and let $\{g_1,\ldots,g_{\dim F}\}$ be the dual basis of $\mathcal B$. Take $f_1=g_1$ and $f_2=\cdots=f_r=0$. Then $\sum_jf_j(b_j)=g_1(b_1)=1\ne0$, a contradiction. Hence $b_1=0$, and the same argument shows that the other $b_j$ are zero as well.
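The choice of the $f_j$ in this argument can be made concrete. A small sketch, assuming $F=\mathbb Q^2$ with a hypothetical nonzero $b_1$ (the vectors below are illustrative, not from the answer):

```python
from fractions import Fraction as Q

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# suppose b_1 != 0 in F = Q^2; b_2 is arbitrary
b = [[Q(3), Q(-1)], [Q(2), Q(5)]]

# extend {b_1} to the basis {b_1, e_2}; the dual functional g_1 satisfies
# g_1(b_1) = 1 and g_1(e_2) = 0, which here forces g_1 = (1/3, 0)
g1 = [Q(1, 3), Q(0)]
assert dot(g1, b[0]) == 1
assert dot(g1, [Q(0), Q(1)]) == 0

# choose f_1 = g_1 and all remaining f_j = 0, as in the answer
f = [g1, [Q(0), Q(0)]]
total = sum(dot(fj, bj) for fj, bj in zip(f, b))
assert total == 1  # g_1(b_1) != 0: the desired contradiction
```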

This lemma extends the defining property of linear independence from scalar coefficients to vector coefficients: just as $\sum_j \lambda_j a_j=0$ forces each scalar $\lambda_j$ to vanish, $\sum_j a_j\otimes b_j=0$ forces each vector $b_j$ to vanish.

Greub's proof is overly complicated. For an arbitrary linear functional $g\colon F\to\mathbb K$, consider the bilinear map $E\times F\to E$ defined by $(x,y)\mapsto g(y)x$. By the universal property of the tensor product, there is a linear map $h:E\otimes F\to E$ with $h(x\otimes y)=g(y)x$. Now $$0=h\left(\sum a_i\otimes b_i\right)=\sum h(a_i\otimes b_i)=\sum g(b_i)a_i.$$ By linear independence of the $a_i$, it follows that $g(b_i)=0$ for all $i$. Since $g$ was arbitrary, it follows that $b_i=0$ for all $i$.

The key idea is that a vector annihilated by every linear functional must be the zero vector. This holds because one can always choose dual-basis functionals, as you (and another user) did above.

Note also that $h=\iota\otimes g:E\otimes F\to E\otimes\Gamma$, where $\Gamma$ is the ground field, so $E\otimes\Gamma\cong E$ (see $\S$ 16 of the book).
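In coordinates, $h$ is just "contract the $F$-slot with $g$". A minimal sketch, assuming $E=\mathbb Q^3$ and $F=\mathbb Q^2$, identifying $x\otimes y$ with the outer product $xy^T$ so that $h(T)=Tg$ as a matrix-vector product; the concrete vectors are hypothetical:

```python
from fractions import Fraction as Q

def outer(x, y):
    # x ⊗ y as the outer product x y^T
    return [[xi * yj for yj in y] for xi in x]

def h(T, g):
    # the induced map h = iota ⊗ g: contract the F-slot with g, i.e. T ↦ T g
    return [sum(row[c] * g[c] for c in range(len(g))) for row in T]

x, y = [Q(1), Q(2), Q(3)], [Q(4), Q(5)]
g = [Q(2), Q(-1)]

# defining property: h(x ⊗ y) = g(y) x
gy = sum(gc * yc for gc, yc in zip(g, y))  # the scalar g(y)
assert h(outer(x, y), g) == [gy * xi for xi in x]
```

By linearity, $h\left(\sum_i a_i\otimes b_i\right)=\sum_i g(b_i)a_i$, so a vanishing tensor forces every coefficient $g(b_i)$ to vanish.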

Recently I was also reading the same proof, and the same question arose for me. In an attempt to justify that last step, I rewrote the proof as follows:

Proof : Since $a_1,\dots,a_r$ are linearly independent, we can find $r$ linear maps $a_1^*,\dots,a_r^*$ from $E$ to the underlying field, $k$, such that $$a_j^*(a_i) = \delta_{ij}$$ for all $i$ and $j$.

Now, for the sake of contradiction, suppose that $b_j \neq 0$ for some $j$, and let $\phi \colon F \to k$ be a linear map such that $\phi(b_j) \neq 0$. Then, for the bilinear map $g \colon E \times F \to k$ given by $g(x,y) = a_j^*(x)\phi(y)$ there is a linear map $\tilde{g} \colon E \otimes F \to k$ with $\tilde{g}(x \otimes y) = g(x,y) = a_j^*(x)\phi(y)$.

It follows that $$0 = \tilde{g}(0) = \tilde{g} \bigg( \sum_{i=1}^r a_i\otimes b_i \bigg) = \sum_{i=1}^r \tilde{g}(a_i \otimes b_i) = \sum_{i=1}^r a^*_j(a_i)\phi(b_i) = \sum_{i=1}^r \delta_{ij} \phi(b_i) = \phi(b_j),$$ which is a contradiction. Hence, $b_j = 0$ for all $j$. $\square$
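The evaluation $\tilde{g}(T)=\phi(b_j)$ in this proof can likewise be checked in coordinates. A minimal sketch, assuming $E=\mathbb Q^3$ and $F=\mathbb Q^2$, identifying $x\otimes y$ with $xy^T$ so that $\tilde g$ becomes the bilinear form $T\mapsto a_j^{*}\,T\,\phi$; the concrete vectors are illustrative:

```python
from fractions import Fraction as Q

def outer(x, y):
    # x ⊗ y as the outer product x y^T
    return [[xi * yj for yj in y] for xi in x]

def g_tilde(T, astar_j, phi):
    # induced map of (x, y) ↦ a_j^*(x) phi(y): the bilinear form a_j^* T phi
    return sum(astar_j[k] * T[k][c] * phi[c]
               for k in range(len(astar_j)) for c in range(len(phi)))

# linearly independent a_1, a_2 with dual functionals a_1^*, a_2^*
a = [[Q(1), Q(0), Q(1)], [Q(0), Q(1), Q(2)]]
astar = [[Q(1), Q(0), Q(0)], [Q(0), Q(1), Q(0)]]
b = [[Q(3), Q(-1)], [Q(2), Q(5)]]
phi = [Q(1), Q(2)]  # any functional with phi(b_1) != 0

# T = sum_i a_i ⊗ b_i
T = [[u + v for u, v in zip(r1, r2)]
     for r1, r2 in zip(outer(a[0], b[0]), outer(a[1], b[1]))]

phi_b1 = sum(p * c for p, c in zip(phi, b[0]))  # the scalar phi(b_1)
assert g_tilde(T, astar[0], phi) == phi_b1
```

If $T$ were $0$, this would give $\phi(b_1)=0$ for every functional $\phi$, forcing $b_1=0$, as in the proof.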