Are multi-dimensional arrays always enough information to describe a tensor?


Let $V$ be a vector space with a finite basis $Q$ of $D$ independent elements, and let $V^*$ denote its dual space, with basis $E$. Any element $v \in V$ can be written as a linear combination of elements of $Q$, say $v = \sum_{i=1}^{D} X_i q_i$, where $X$ denotes the component coefficients needed to represent $v$.

Here is what I have in mind. A tensor is a multilinear map that takes in $p$ covectors and $q$ vectors and spits out a real number. The Riesz representation theorem says that every linear form mapping a vector to a number is the action of taking the dot product with some fixed vector. Can I infer from this that all the information of a $(p,q)$ tensor can be described using a multi-dimensional array?

Feeding the tensor $p+q-1$ inputs (covectors and vectors) leaves a linear form that takes $X$ (the component coefficients of a vector $v$) to a real number; rewrite this as a dot product with a vector $z$. Feeding the tensor $p+q-2$ inputs instead leaves $D$ linear forms, each mapping a vector to a real number (one for each component of $z$). Continuing inductively, any tensor can be broken down into $p+q$ layers of summation, and hence into a $(p+q)$-dimensional array (as long as the tensor is a multilinear map)?
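The inductive step above can be checked numerically. Here is a small sketch (my own, in NumPy, not from the post) using a $(1,2)$-tensor stored as a 3-dimensional array of components in a fixed basis: feeding it $p+q-1 = 2$ inputs leaves a linear form, which is indeed a dot product with a vector $z$ read off from the remaining axis.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 3  # dimension of V

# A (1,2)-tensor stored as a (p+q) = 3-dimensional array of components.
T = rng.standard_normal((D, D, D))

def evaluate(T, alpha, u, v):
    # Multilinear evaluation: sum_{i,j,k} T[i,j,k] * alpha_i * u_j * v_k
    return np.einsum('ijk,i,j,k->', T, alpha, u, v)

alpha = rng.standard_normal(D)          # components of a covector
u, v = rng.standard_normal(D), rng.standard_normal(D)

# Feeding p+q-1 = 2 inputs leaves a linear form in the last slot;
# its Riesz vector z is the partial contraction along the first two axes.
z = np.einsum('ijk,i,j->k', T, alpha, u)
assert np.isclose(evaluate(T, alpha, u, v), z @ v)

# Linearity in each slot (checked here for the middle slot):
a, b = 2.0, -1.5
w = rng.standard_normal(D)
assert np.isclose(evaluate(T, alpha, a * u + b * w, v),
                  a * evaluate(T, alpha, u, v) + b * evaluate(T, alpha, w, v))
```

The same contraction pattern works one slot at a time, which is exactly the layer-by-layer summation described above.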

By the way, is the dot product a $(1,1)$ tensor? Is multiplication by a scalar a $(1,1)$ tensor whose matrix is the identity matrix times the scalar?
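As a quick numerical illustration of the second question (my own sketch, not asserting the classification either way): the array $c \cdot I$ does act on a vector as multiplication by $c$, and pairing it with one covector and one vector reproduces $c$ times the natural pairing.

```python
import numpy as np

D, c = 3, 2.5
I = np.eye(D)
v = np.array([1.0, -2.0, 0.5])
alpha = np.array([0.3, 1.0, -1.0])  # components of a covector

# Scalar multiplication as the matrix c*I acting on v:
assert np.allclose((c * I) @ v, c * v)

# With one covector slot and one vector slot, the array c*I gives
# alpha @ (c*I) @ v = c * <alpha, v>, i.e. c times the natural pairing.
assert np.isclose(alpha @ (c * I) @ v, c * np.dot(alpha, v))
```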