How does one show that coev is independent of the choice of basis?


Consider the coevaluation map $\operatorname{coev} \colon k \rightarrow V \otimes V^{*}$, defined by $1 \mapsto \sum_{i} b_{i} \otimes b^{*}_{i}$, which exhibits $V^{*}$ as a left dual of $V \in \mathbf{vect}_{k}$.

How can I conclude from this that $\operatorname{coev}$ is independent of the choice of basis?

Best answer.

We have for every two $k$-vector spaces $V$ and $W$ a unique linear map $$ Φ \colon V^* ⊗ W \to \operatorname{Hom}(V, W) $$ such that $$ Φ(v^* ⊗ w)(v) = v^*(v) w $$ for all $v^* ∈ V^*$, $w ∈ W$, $v ∈ V$.
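If it helps to see this concretely: the following NumPy sketch (my own illustration, not part of the argument, over $k = \mathbb{R}$ with everything in standard coordinates) checks that $Φ(v^* ⊗ w)$ is just the rank-one map whose matrix is the outer product of $w$ with the coefficient row of $v^*$.

```python
import numpy as np

# A minimal sketch over k = R, with V = R^n and W = R^m in standard
# coordinates: Phi sends v* (x) w to the rank-one linear map
# v |-> v*(v) w, whose matrix is the outer product w (v*)^T.
rng = np.random.default_rng(0)
n, m = 4, 3
vstar = rng.standard_normal(n)   # a functional on V, as a row of coefficients
w = rng.standard_normal(m)       # a vector in W
v = rng.standard_normal(n)       # a test vector in V

Phi_matrix = np.outer(w, vstar)  # matrix of Phi(v* (x) w)

# Phi(v* (x) w)(v) = v*(v) * w, checked numerically:
assert np.allclose(Phi_matrix @ v, (vstar @ v) * w)
```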

Proposition. If either $V$ or $W$ is finite-dimensional, then $Φ$ is an isomorphism. (If both $V$ and $W$ are infinite-dimensional, then $Φ$ will still be injective, but not surjective.)

Proof sketch. Suppose that $W$ is finite-dimensional with basis $w_1, \dotsc, w_m$. Every map $f$ from $V$ to $W$ is then of the form $$ f(v) = f_1(v) w_1 + \dotsb + f_m(v) w_m $$ for suitable coefficient maps $f_1, \dotsc, f_m$ from $V$ to $k$. If $f$ is linear, then each coefficient map $f_1, \dotsc, f_m$ is again linear, and we can consider the element $Ψ(f) := \sum_{i = 1}^m f_i ⊗ w_i$ of $V^* ⊗ W$. We have constructed in this way a map $Ψ$ from $\operatorname{Hom}(V, W)$ to $V^* ⊗ W$. This map $Ψ$ is linear, and the maps $Φ$ and $Ψ$ are mutually inverse.

Suppose now that $V$ is finite-dimensional with basis $v_1, \dotsc, v_n$, and let $v^*_1, \dotsc, v^*_n$ be the corresponding dual basis of $V^*$. Every linear map $f$ from $V$ to $W$ is uniquely determined by its values $f(v_1), \dotsc, f(v_n)$. We therefore consider the element $Ψ(f) ≔ \sum_{i = 1}^n v_i^* ⊗ f(v_i)$ of $V^* ⊗ W$. This map $Ψ$ is linear, and the maps $Φ$ and $Ψ$ are mutually inverse. $∎$
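Again just as an illustration (not part of the proof), here is a NumPy sketch over $k = \mathbb{R}$ of the second construction of $Ψ$, using the standard basis $e_1, \dotsc, e_n$ of $V = \mathbb{R}^n$: reassembling $\sum_i Φ(e_i^* ⊗ f(e_i))$ recovers the matrix of $f$, which is the "mutually inverse" claim in one direction.

```python
import numpy as np

# A sketch of Psi(f) = sum_i e_i^* (x) f(e_i) in the standard basis of
# V = R^n: applying Phi summand by summand gives Phi(Psi(f)) = f.
rng = np.random.default_rng(1)
n, m = 4, 3
F = rng.standard_normal((m, n))          # matrix of a linear map f: V -> W
E = np.eye(n)                            # e_i as columns; e_i^* as rows

# Phi(Psi(f)) = sum_i f(e_i) (e_i^*)^T, assembled as a sum of outer products:
reassembled = sum(np.outer(F @ E[:, i], E[i, :]) for i in range(n))
assert np.allclose(reassembled, F)       # Phi(Psi(f)) = f
```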

We have for every $k$-vector space $V$ a very special element in $\operatorname{End}(V)$, namely the identity map of $V$. If $V$ is finite-dimensional, then we also have the isomorphism of vector spaces $$ Φ \colon V^* ⊗ V \to \operatorname{End}(V) $$ given by $$ Φ(v^* ⊗ v)(w) = v^*(v) w \,. $$ Under this isomorphism, the element $\mathrm{id}_V$ corresponds to a special element $t$ of $V^* ⊗ V$. Note that neither the element $\mathrm{id}_V$ of $\operatorname{End}(V)$ nor the isomorphism $Φ$ depends on a basis of $V$, so neither does $t$.

We can, however, express $t$ in terms of a basis $v_1, \dotsc, v_n$ of $V$. Indeed, we can see from the above proof sketch that $t$ must be given by $$ t = \sum_{i = 1}^n v_i^* ⊗ \mathrm{id}_V(v_i) = \sum_{i = 1}^n v_i^* ⊗ v_i \,, $$ where $v^*_1, \dotsc, v^*_n$ denotes the basis of $V^*$ that is dual to the basis $v_1, \dotsc, v_n$ of $V$.
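To make the basis independence tangible, here is a small NumPy sketch (my own addition, over $k = \mathbb{R}$): the columns of a random invertible matrix $B$ form a basis of $\mathbb{R}^n$, the rows of $B^{-1}$ form the corresponding dual basis, and $Φ(t)$ assembles to the identity matrix no matter which $B$ we pick.

```python
import numpy as np

# Basis independence over k = R: a basis of R^n is given by the columns
# of an invertible matrix B, and the dual basis by the rows of B^{-1}.
# The element t = sum_i v_i^* (x) v_i then maps under Phi to
# sum_i outer(v_i, v_i^*) = B @ B^{-1} = id.
rng = np.random.default_rng(2)
n = 4
B = rng.standard_normal((n, n))          # columns v_1, ..., v_n: a random basis
B_inv = np.linalg.inv(B)                 # rows v_1^*, ..., v_n^*: the dual basis

Phi_t = sum(np.outer(B[:, i], B_inv[i, :]) for i in range(n))
assert np.allclose(Phi_t, np.eye(n))     # Phi(t) = id_V, whatever basis we chose
```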


We can also give another, more computational proof, based on the following statement from linear algebra:

Let $V$ and $W$ be two finite-dimensional $k$-vector spaces with ordered bases $\mathcal{B}$ and $\mathcal{C}$ respectively. Let $\mathcal{B}^*$ and $\mathcal{C}^*$ be the resulting dual bases of $V^*$ and $W^*$ respectively.

Let $f$ be a linear map from $V$ to $W$, and let $f^*$ be the induced dual map from $W^*$ to $V^*$. Let $A$ be the representing matrix of $f$ from the basis $\mathcal{B}$ to the basis $\mathcal{C}$.

Then the representing matrix of $f^*$ from the basis $\mathcal{C}^*$ to the basis $\mathcal{B}^*$ is given by the transpose of $A$.
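For the skeptical reader, the following NumPy sketch (my own check, over $k = \mathbb{R}$, with bases encoded as the columns of invertible matrices and dual bases as the rows of their inverses) verifies this transpose statement on random data.

```python
import numpy as np

# The transpose statement over k = R: bases of V and W are the columns of
# invertible matrices B and C; the dual bases are the rows of B^{-1} and
# C^{-1}.  For f with standard matrix F, the representing matrix from the
# basis B to the basis C is A = C^{-1} F B, and the dual map f^* sends a
# functional phi (a row vector) to phi @ F.
rng = np.random.default_rng(3)
n, m = 4, 3
B = rng.standard_normal((n, n))              # basis of V, as columns
C = rng.standard_normal((m, m))              # basis of W, as columns
F = rng.standard_normal((m, n))              # f in standard coordinates
A = np.linalg.inv(C) @ F @ B                 # representing matrix of f

# Column j of M holds the coefficients of f^*(c_j^*) in the basis B^*;
# a row functional phi has coefficients phi @ B with respect to B^*.
C_inv = np.linalg.inv(C)
M = np.column_stack([C_inv[j, :] @ F @ B for j in range(m)])
assert np.allclose(M, A.T)                   # f^* is represented by A^T
```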

Let now $\mathcal{B} = (b_1, \dotsc, b_n)$ and $\mathcal{C} = (c_1, \dotsc, c_n)$ be two bases of $V$. We can express the elements of the basis $\mathcal{C}$ in terms of the basis $\mathcal{B}$ via coefficients $a_{ij}$, such that $$ c_j = \sum_{i = 1}^n a_{ij} b_i $$ for every index $j = 1, \dotsc, n$. The matrix $A = (a_{ij})_{ij}$ is the representing matrix of $\mathrm{id}_V$ from the basis $\mathcal{C}$ to the basis $\mathcal{B}$. It follows that the transpose of $A$ is the representing matrix of $\mathrm{id}_V^* = \mathrm{id}_{V^*}$ from the basis $\mathcal{B}^*$ to the basis $\mathcal{C}^*$. In other words, we have $$ b_j^* = \sum_{i = 1}^n a_{ji} c_i^* $$ for every index $j = 1, \dotsc, n$. (This can also be checked by hand, without the theory of representing matrices, by evaluating both sides on the basis elements belonging to $\mathcal{C}$.)

We can now express both $\sum_{j = 1}^n b_j ⊗ b_j^*$ and $\sum_{j = 1}^n c_j ⊗ c_j^*$ with respect to the induced basis $\mathcal{B} ⊗ \mathcal{C}^*$ of $V ⊗ V^*$. We find that $$ \sum_{j = 1}^n b_j ⊗ b_j^* = \sum_{j = 1}^n b_j ⊗ \sum_{i = 1}^n a_{ji} c_i^* = \sum_{i, j = 1}^n a_{ji} b_j ⊗ c_i^* = \sum_{i, j = 1}^n a_{ij} b_i ⊗ c_j^* \,, $$ and similarly $$ \sum_{j = 1}^n c_j ⊗ c_j^* = \sum_{j = 1}^n \sum_{i = 1}^n a_{ij} b_i ⊗ c_j^* = \sum_{i, j = 1}^n a_{ij} b_i ⊗ c_j^* \,. $$ Together, this shows that $$ \sum_{j = 1}^n b_j ⊗ b_j^* = \sum_{i, j = 1}^n a_{ij} b_i ⊗ c_j^* = \sum_{j = 1}^n c_j ⊗ c_j^* \,.$$
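As a final sanity check (again my own illustration over $k = \mathbb{R}$, not part of the proof), this NumPy sketch verifies both the dual-basis transformation rule and the resulting equality of the two coevaluation tensors on random bases, with a pure tensor $v ⊗ φ$ in $V ⊗ V^*$ represented by the outer product of $v$ with the coefficient row of $φ$.

```python
import numpy as np

# Two bases of V = R^n, as the columns of invertible matrices B and C,
# with dual bases the rows of B^{-1} and C^{-1}.  The change of basis
# c_j = sum_i a_ij b_i is encoded by A = B^{-1} C.
rng = np.random.default_rng(4)
n = 4
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
B_inv, C_inv = np.linalg.inv(B), np.linalg.inv(C)
A = B_inv @ C                                 # c_j = sum_i a_ij b_i

# The dual bases transform by the transpose: b_j^* = sum_i a_ji c_i^*.
assert np.allclose(B_inv, A @ C_inv)

# Both coevaluation tensors, built from different bases, agree:
t_B = sum(np.outer(B[:, j], B_inv[j, :]) for j in range(n))
t_C = sum(np.outer(C[:, j], C_inv[j, :]) for j in range(n))
assert np.allclose(t_B, t_C)                  # independent of the basis
```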