Why does the definition of tensor as multilinear map always use the same vector space and its dual space?


The definition of a tensor as a multilinear map is usually given as

$$T:V^* \times \cdots \times V^* \times V \times \cdots \times V \to \mathbb{R}$$

where $V$ is a vector space, $V^*$ is its dual space, and $\mathbb{R}$ is the field of real numbers. Why can't a tensor be a map from several different vector spaces into yet another vector space, like

$$T:V^* \times W^* \times \cdots \times V \times W \to Z\ ?$$

Can the mapping $$V^* \times W^* \times \cdots \times V \times W \to Z$$ be represented as a tensor?

For example, I know that the tensor product can be defined via a bilinear map $$\otimes:V \times W \rightarrow V \otimes W$$ and that, by the universal property, any bilinear map $$\alpha:V \times W \rightarrow Z$$ factors uniquely through a linear map $$u:V \otimes W \rightarrow Z.$$
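The universal property above can be checked numerically in finite dimensions. The following sketch (with an arbitrarily chosen matrix $A$ and dimensions $2$ and $3$, purely for illustration) encodes a bilinear map $\alpha(v,w)=v^{\mathsf T}Aw$ and shows that it agrees with a linear map $u$ applied to the element $v\otimes w$, represented concretely as a flattened outer product:

```python
import numpy as np

# A bilinear map alpha: R^2 x R^3 -> R can be encoded by a 2x3 matrix A,
# with alpha(v, w) = v^T A w.  (A and the dimensions are illustrative choices.)
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

def alpha(v, w):
    return v @ A @ w

# The pure tensor v (x) w can be represented as the outer product,
# an element of R^2 (x) R^3, identified with R^6 by flattening.
def tensor(v, w):
    return np.outer(v, w).ravel()

# The unique linear map u: R^2 (x) R^3 -> R given by the universal
# property is then just the flattened matrix A, acting by dot product.
u = A.ravel()

v = np.array([1.0, -2.0])
w = np.array([0.5, 1.0, 2.0])

# alpha(v, w) and u applied to v (x) w agree.
print(np.isclose(alpha(v, w), u @ tensor(v, w)))  # True
```

The point is that the bilinear data of $\alpha$ and the linear data of $u$ are the same numbers, just reorganized.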

But here, is the linear map $u$, which acts on members of $V \otimes W$ (which are type $(2,0)$ tensors, I guess?), a tensor too?

How is the definition of a tensor as a multilinear map from different vector spaces given, for instance $$T: V \times W \rightarrow \mathbb{R}$$ (a type $(0,2)$ tensor, I guess?)

or $$T: W^* \times V \times W \rightarrow \mathbb{R}$$ (a type $(1,2)$ tensor, I guess?)

Is it possible to define a tensor as a multilinear map that takes inputs from different vector spaces and their dual spaces?

I am asking because the definition of a tensor as a multilinear map is always given as a map from copies of a single vector space, say $V$, and its dual $V^*$, $$T: (V^*)^p \times V^q \rightarrow \mathbb{R},$$ and never using different vector spaces and their duals, like $V$, $W$, $V^*$, $W^*$. Can a tensor take in elements from different vector spaces? (I know that multilinear maps in general can be defined this way, and also that a tensor can be defined as an element of the tensor product of different vector spaces.)
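For concreteness, the standard single-space definition $T:(V^*)^p \times V^q \to \mathbb{R}$ can be realized in coordinates as a multi-index array, with evaluation given by full contraction. A minimal sketch for a $(1,2)$ tensor on $V=\mathbb{R}^2$ (the array entries are arbitrary illustrative data):

```python
import numpy as np

# A (1,2) tensor T: V* x V x V -> R on V = R^2 stored as a 3-index
# array T[a, i, j]; the first index pairs with a covector, the last
# two with vectors.  (Entries 0..7 are arbitrary illustrative data.)
T = np.arange(8.0).reshape(2, 2, 2)

def evaluate(T, f, v, w):
    # f is a covector (element of V*); v and w are vectors in V.
    # Full contraction over all three indices yields a scalar.
    return np.einsum('aij,a,i,j->', T, f, v, w)

f = np.array([1.0, 2.0])
v = np.array([3.0, 0.0])
w = np.array([0.0, 1.0])
print(evaluate(T, f, v, w))  # 33.0
```

Multilinearity of `evaluate` in each slot is exactly linearity of the contraction in each argument.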



BEST ANSWER

The following is from the bottom of the "Using Tensor Products" section.

> Tensor products can be defined in great generality – for example, involving arbitrary modules over a ring. In principle, one could define a "tensor" simply to be an element of any tensor product. However, the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space V and its dual, as above.

From this, it sounds like it is simply a convention in mathematics. Note that in the finite-dimensional case, it "doesn't matter" to a certain extent, since for any $k$-vector space $V$ we have $V\cong k^n$ for some $n$.

This means that when we look at $V\times W\times W^*$, we can say $V\cong\mathbb{R}^n$ and $W\cong\mathbb{R}^m$, so this is just $\mathbb{R}^{n+m}\times (\mathbb{R}^m)^*$.

I get the impression that one reason to define them over a single vector space is so that a tensor can be referred to solely by its $(p,q)$ type. I believe this stops working in the cases you mention. One way to see this is that the notation is no longer "consistent". By this, I mean that if $$T:V\times W\times W^*\to\mathbb{R}$$ is a $(1,2)$ tensor, this remains true when $V\cong W$. But what if $V\cong\mathbb{R}^2$ and $W\cong\mathbb{R}^3$? Then, viewing everything as copies of $\mathbb{R}$, this becomes a $(3,5)$ tensor, which is undesirable. The $2$ covariant indices somehow became $5$ covariant indices without any change in the underlying mathematical structure, only in our "perspective" on it. This in turn means we were unjustified in treating the two covariant indices as the "same" in the first place, so the $(p,q)$ notation was flawed.
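None of this stops the mixed-space maps in the question from being computed: a multilinear map $T:V\times W\times W^*\to\mathbb{R}$ with $V=\mathbb{R}^2$, $W=\mathbb{R}^3$ is still just a $3$-index array; only the $(p,q)$ bookkeeping breaks. A small sketch (array entries drawn from a seeded random generator, purely for illustration) showing that such a map is a shape-$(2,3,3)$ array and is linear in each slot:

```python
import numpy as np

# T: V x W x W* -> R with V = R^2, W = R^3 is still just a 3-index
# array of shape (2, 3, 3); what breaks is only the (p, q) bookkeeping,
# not the computation.  (Entries are arbitrary illustrative data.)
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 3, 3))

def evaluate(T, v, w, g):
    # v in V, w in W, g a covector on W; full contraction gives a scalar.
    return np.einsum('iaj,i,a,j->', T, v, w, g)

v = rng.standard_normal(2)
w = rng.standard_normal(3)
g = rng.standard_normal(3)

# Multilinearity still holds in each slot, e.g. in the first argument:
print(np.isclose(evaluate(T, 2.0 * v, w, g), 2.0 * evaluate(T, v, w, g)))  # True
```

So the obstruction the answer describes is purely notational: the array and its contractions are indifferent to which factor came from which space.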