I am trying to understand the definition of a tensor. The textbook definition is: a tensor is a multilinear map $$ \underbrace{V \times \cdots \times V}_{p \text{ times}} \times \underbrace{V^* \times \cdots \times V^*}_{q \text{ times}} \to \mathbb{F}, $$ where $V$ is a vector space, $V^*$ its dual, and $\mathbb{F}$ the underlying scalar field; the map is linear in each variable separately (on the Cartesian product — equivalently, a linear map on the tensor product of these spaces). In short, a $T^p_q$ tensor takes $p$ vectors from $V$ and $q$ covectors from $V^*$ and maps them to a scalar in $\mathbb{F}$. Upon choosing a basis for $V$ (and the induced dual basis for $V^*$), the tensor takes the form of a multidimensional array (at least for finite-dimensional $V$). The rank of the tensor is simply $p+q$. What I really don't understand here is:
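To make the definition concrete for myself, here is how I picture a small case numerically (a NumPy sketch; the choice of $n$, the random components, and treating a $T^1_1$ tensor as an $n \times n$ array are just my assumptions):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

# A T^1_1 tensor: a multilinear map taking one vector (from V) and one
# covector (from V^*); once a basis is fixed it is just an n x n array.
T = rng.standard_normal((n, n))

v = rng.standard_normal(n)  # components of a vector in V
w = rng.standard_normal(n)  # components of a covector in V^*

# Feeding both arguments yields a scalar:
# T(v, w) = sum_{i,j} T[i, j] * v[i] * w[j]
scalar = np.einsum('ij,i,j->', T, v, w)
print(scalar)  # a single number in the field F (here, a float)
```

Filling *all* slots gives an element of $\mathbb{F}$, matching the definition above.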
- What does it mean when someone says a rank-3 tensor acts on a rank-2 tensor to produce a rank-1 tensor, or that a rank-3 tensor acts on a rank-1 tensor to produce a rank-2 tensor? How are the domain and range of the tensor defined in this case? Is it simply that $\underbrace{V \times \cdots \times V}_{p \text{ times}}$ forms the domain and $\underbrace{V^* \times \cdots \times V^*}_{q \text{ times}}$ forms the range when viewing the tensor as an operator?
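  My current guess, in array terms, is that "acts on" means contraction: filling some of the slots and leaving the rest free. A NumPy sketch of that guess (which slots get contracted is my own arbitrary choice):

  ```python
  import numpy as np

  n = 3
  rng = np.random.default_rng(1)

  A = rng.standard_normal((n, n, n))  # rank-3 array: three slots
  B = rng.standard_normal((n, n))     # rank-2 array: two slots
  u = rng.standard_normal(n)          # rank-1 array: one slot

  # Feeding B into two of A's three slots leaves one free slot -> rank 1.
  r1 = np.einsum('ijk,jk->i', A, B)
  print(r1.shape)  # (3,)

  # Feeding u into one slot leaves two free slots -> rank 2.
  r2 = np.einsum('ijk,k->ij', A, u)
  print(r2.shape)  # (3, 3)
  ```

  So the partial evaluation $3 - 2 = 1$ and $3 - 1 = 2$ would account for the ranks quoted above, but I am unsure whether this is what is actually meant.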
- Since $V$ and $V^*$ have the same dimension (for a finite-dimensional vector space $V$), is it reasonable to conclude that a tensor can never have a rectangular matrix (or, more generally, a rectangular multidimensional array) representation?
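  To state my second question concretely: in components, every slot ranges over the same dimension $n = \dim V = \dim V^*$, so the array would always be "cubical". A minimal sketch of what I mean (assuming $\dim V = 4$):

  ```python
  import numpy as np

  n = 4  # dim V = dim V^* = n (finite-dimensional)

  # A T^2_1 tensor in components: three slots, each of size n,
  # so the array is n x n x n -- every axis has the same length.
  T = np.zeros((n, n, n))
  print(T.shape)  # (4, 4, 4)
  ```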