Trying to understand the Isomorphism between $T^1_1(V)$ and $End(V)$


Let $V$ be an $n$-dimensional vector space.

Let $T_1^1$ be the set of bilinear functions $F: V^* \times V \rightarrow \mathbb{R}$ and $\mathrm{End}(V)$ be the set of all linear functions $A: V \rightarrow V$. Suppose we choose a basis $(e_1,\dots,e_n)$ for $V$ and denote the dual basis of $V^*$ by $(E_1,\dots,E_n)$.

Then we can write every $v \in V$ as $v=a_1e_1+\dots+a_ne_n$ and every $w \in V^*$ as $w = b_1E_1+\dots+b_nE_n$. Since $F$ is bilinear, we can represent it by a matrix. How do we write down the matrix representation of this bilinear function?

In general, this isomorphism is confusing to me for some reason. Can somebody help me understand it?

Also, say we are in the vector space $\mathbb{R}^2$ and take its dual space. I'm imagining vectors in $\mathbb{R}^2$ as blue vectors starting at the origin, and I'm imagining its dual space as red vectors coming out of the origin. How does the red vector act on the blue vector if they are at the same point in the plane? A dot product? IDK, I know this question is sort of vague and these are just some thoughts, but I hope somebody out there has something to say! Usually I get some good answers to these soft questions that end up being quite useful :P.



Answer by janmarqz:

It is well known that the map $f\otimes v\mapsto L_{f\otimes v}$, where $$L_{f\otimes v}:V\to V$$ is defined by $$L_{f\otimes v}(w)=f(w)v,$$ gives an isomorphism of vector spaces $T^1_1V\cong{\rm End}\,V$; the details are not difficult to check.
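As a concrete sanity check of this map, here is a minimal numpy sketch in coordinates (the particular vectors `f`, `v`, `w` are illustrative choices, not from the answer): a covector $f$ becomes a row of coefficients, a vector $v$ a column, and $L_{f\otimes v}$ is the rank-one matrix $v\,f^{\mathsf T}$.

```python
import numpy as np

# Coordinate sketch of f ⊗ v ↦ L_{f⊗v} on R^3 (example numbers are arbitrary).
f = np.array([1.0, 2.0, 0.0])   # covector: w ↦ w_1 + 2*w_2
v = np.array([3.0, 0.0, 1.0])   # vector
w = np.array([0.5, 1.0, 4.0])   # a test vector

# The matrix of L_{f⊗v} is the outer product v f^T ...
L = np.outer(v, f)

# ... and indeed L w = f(w) v, as in the definition above.
assert np.allclose(L @ w, f.dot(w) * v)

# A single tensor f ⊗ v gives a rank-one endomorphism; general elements of
# T^1_1(V) are sums of such tensors, which is why every matrix is reached.
assert np.linalg.matrix_rank(L) == 1
```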

Second answer:

By bilinearity you have

$$F(w,v) = F\left(\sum_i b_i E_i, \sum_j a_j e_j\right) = \sum_{i,j} b_i a_j F(E_i, e_j),$$

hence the matrix representing $F$ has entries $M_{ij} = F(E_i, e_j)$. (Notice that the identity matrix represents precisely the "standard" bilinear map $(w,v) \mapsto w(v)$.)
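A small numpy sketch of this recipe (the specific bilinear map `F` below is a hypothetical example, chosen only to have something concrete to evaluate): feed the dual/standard basis vectors into $F$ to read off $M_{ij} = F(E_i, e_j)$, then check that bilinearity gives $F(w,v) = b^{\mathsf T} M a$.

```python
import numpy as np

n = 2

def F(b, a):
    # An arbitrary example bilinear map on R^2* x R^2, given in terms of the
    # coefficients b of w (dual basis) and a of v (standard basis).
    return 2*b[0]*a[0] + 3*b[0]*a[1] - b[1]*a[0] + 5*b[1]*a[1]

# Rows of the identity are the coefficient vectors of E_i and e_j.
E = np.eye(n)
M = np.array([[F(E[i], E[j]) for j in range(n)] for i in range(n)])

# Bilinearity: F(w, v) = sum_{i,j} b_i a_j F(E_i, e_j) = b^T M a.
b = np.array([1.0, -2.0])
a = np.array([4.0, 0.5])
assert np.isclose(F(b, a), b @ M @ a)

# For the "standard" pairing (w, v) ↦ w(v) = sum_i b_i a_i, the same recipe
# would produce the identity matrix, matching the remark above.
```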

If you think of the matrix $M_{ij}$ as representing an endomorphism of $V$, it would be the endomorphism $$v \mapsto \left(\sum_j a_j F(E_i, e_j)\right)_{i=1}^n = \left(F\left(E_i, \sum_j a_j e_j\right)\right)_{i=1}^n = (F(E_i, v))_{i=1}^n.$$
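In coordinates this reads: the endomorphism sends the coefficient vector $a$ of $v$ to $Ma$, whose $i$-th component is $F(E_i, v)$. A short numpy check (again with a hypothetical example bilinear map, chosen for illustration):

```python
import numpy as np

def F(b, a):
    # A hypothetical example bilinear map on R^2* x R^2, in coefficients.
    return 2*b[0]*a[0] + 3*b[0]*a[1] - b[1]*a[0] + 5*b[1]*a[1]

n = 2
E = np.eye(n)
M = np.array([[F(E[i], E[j]) for j in range(n)] for i in range(n)])

a = np.array([4.0, 0.5])   # coefficients of v
image = M @ a              # the associated endomorphism applied to v

# Each component of M a is F(E_i, v), exactly as in the displayed formula.
for i in range(n):
    assert np.isclose(image[i], F(E[i], a))
```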

For a basis-free description, see the answer of janmarqz, which uses the identifications

$$ \mathrm{Bil}(V^*,V; \mathbb{R}) \cong \mathrm{Hom}(V^* \otimes V , \mathbb{R}) \cong (V^* \otimes V)^* \cong V^{**} \otimes V^* \cong V \otimes V^*.$$

Edit: A few remarks following from the discussion below. Everything here is for finite-dimensional vector spaces (in the infinite-dimensional case these identifications break down). Here is one reason why the isomorphism $T^1_1(V) \cong \mathrm{End}(V)$ is "more special" than an isomorphism to $T^2(V)$ or $T_2(V)$: for all finite-dimensional vector spaces $V$ and $W$, there are canonical isomorphisms $$ \mathrm{Bil}(W^*,V; \mathbb{R}) \cong \mathrm{Hom}(W^* \otimes V , \mathbb{R}) \cong (W^* \otimes V)^* \cong W^{**} \otimes V^* \cong W \otimes V^* \cong \mathrm{Hom}(V,W).$$

You can write them all down (at least in one direction) without choosing a basis. This is not true for $\mathrm{Bil}(W,V; \mathbb{R}) \cong \mathrm{Hom}(W,V)$, for instance: there you need to choose a basis. For plain vector spaces this might not make a huge difference, but once you work in a different setting (for instance in representation theory, where only equivariant maps are allowed), these canonical isomorphisms still hold, whereas the ones obtained by choosing bases no longer work. The bottom line is that canonical isomorphisms, whenever they are available, are usually better than isomorphisms obtained by choosing a basis, because they may carry over to more general settings.

As for blue and red vectors: yes, given a finite-dimensional vector space $V$, you can choose a basis (which is equivalent to giving an isomorphism $V \to \mathbb{R}^n$), endow $V^*$ with the dual basis (which gives an isomorphism $V^* \to \mathbb{R}^n$), and then represent both bases in $\mathbb{R}^n$, in blue and red respectively: blue and red vectors will all agree with the standard basis of $\mathbb{R}^n$. However, if you keep the same isomorphisms $V \to \mathbb{R}^n$ and $V^* \to \mathbb{R}^n$ but change the basis of $V$, funny things happen when you accordingly change the basis of $V^*$: the two bases do not vary in the same way, so the blue and red vectors no longer coincide. This is because the isomorphism $V \to \mathbb{R}^n$ depends covariantly on the basis you choose in $V$, whereas the isomorphism $V^* \to \mathbb{R}^n$ depends contravariantly on it. This is all a bit messy, I know, and there are probably much better ways to explain it.

What I really want to say is the following: if you pick a vector $v \in V$, there is no canonical choice of a "dual covector" in $V^*$. If you choose a basis $e_i$ and take its dual basis $E_i$, then you can map $v=\sum_i a_i e_i$ to $w=\sum_i a_i E_i$ in $V^*$. But if you then take another basis $e_i'$ with dual basis $E_i'$, and write $v=\sum_i a_i e_i = \sum_i a_i' e_i'$ in the new basis, you will be disappointed to find out that in general $w \neq \sum_i a_i' E_i'$.
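This covariant/contravariant mismatch is easy to witness numerically. A minimal numpy sketch (the change-of-basis matrix `P` and the coefficients `a` are arbitrary example values): if the new basis vectors are the columns of $P$, then vector coefficients transform as $a' = P^{-1}a$, while the dual-basis coefficients of $w = \sum_i a_i E_i$ transform as $P^{\mathsf T}a$, and the two disagree.

```python
import numpy as np

# New basis e'_j = sum_i P_ij e_i, i.e. the columns of P are the new basis
# vectors expressed in the old basis. P is an arbitrary invertible example.
P = np.array([[1.0, 1.0],
              [0.0, 2.0]])

a = np.array([3.0, -1.0])      # coefficients of v in the old basis e_i

# The same v in the new basis: a = P a', so a' = P^{-1} a (contravariant).
a_new = np.linalg.solve(P, a)

# The covector w = sum_i a_i E_i, re-expressed in the new dual basis E'_i:
# dual-basis coefficients transform as b' = P^T b (covariant).
w_new = P.T @ a

# If "same coefficients" were a basis-independent identification V -> V*,
# these two would agree. They do not, except for very special P.
assert not np.allclose(a_new, w_new)

# Sanity check of the vector transformation rule: P a' recovers a.
assert np.allclose(P @ a_new, a)
```

(The exception is orthogonal $P$, where $P^{-1} = P^{\mathsf T}$: this is exactly why the identification feels harmless when one only ever uses orthonormal bases of $\mathbb{R}^n$.)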