I've been trying to wrap my head around tensors for some time now, and I have figured out some of the ideas. The definition of rank $(0, k)$ tensors as $k$-multilinear maps on a vector space makes sense to me, and it makes sense that such objects would be useful. The characterization of some tensors as linear maps also makes sense to me. It's tensors of rank $(k, 0)$ that I really just don't get.
Let me explain my problem a bit more specifically, using examples from physics: namely the metric tensor, the inertia tensor, and the electromagnetic field strength tensor. To me, the metric tensor makes perfect sense as a generalization of the inner product. We need some object to tell us how aligned two vectors are on a general manifold, so we'll need a tensor that takes in two vectors and spits out a number: obviously, a rank $(0, 2)$ tensor. The inertia tensor is also very intuitive to me. We know that the angular momentum of a rigid body is linearly related to its angular velocity, so we'll need a linear transformation to turn the angular velocity vector into an angular momentum vector: a rank $(1, 1)$ tensor.
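To make that slot-counting picture concrete, here's a small numerical sketch of the two cases I do understand (NumPy; all the numbers are made up for illustration):

```python
import numpy as np

# Metric tensor: rank (0, 2) -- eats two vectors, returns a number.
g = np.diag([1.0, 1.0, 1.0])        # Euclidean metric in an orthonormal basis
u = np.array([1.0, 2.0, 0.0])
v = np.array([3.0, -1.0, 1.0])
inner = u @ g @ v                   # g(u, v) = g_{ij} u^i v^j

# Inertia tensor: rank (1, 1) -- eats one vector, returns a vector.
I = np.diag([2.0, 3.0, 4.0])        # made-up principal-axis inertia tensor
omega = np.array([0.1, 0.0, 0.5])   # angular velocity
L = I @ omega                       # angular momentum L^i = I^i_j omega^j
```

So in both cases the rank tells me what kind of machine I have: how many vectors go in, and whether a vector or a number comes out.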
The electromagnetic field strength tensor, on the other hand, carries none of the same necessity or intuitiveness. Sure, the electric and magnetic fields should transform in the right way under Lorentz transformations, but why on earth is a rank $(2, 0)$ tensor the right way to do that? Why is it natural to introduce a rank $(2, 0)$ tensor in this context?
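I can verify numerically that the rank-2 transformation rule reproduces the textbook mixing of $\mathbf{E}$ and $\mathbf{B}$ under a boost — here's a sketch, assuming units with $c = 1$ and one common sign convention, $F^{i0} = E_i$ — but that only confirms the rule, it doesn't explain it:

```python
import numpy as np

# Field strength F^{mu nu} with F^{i0} = E_i (one common convention, c = 1).
E = np.array([0.0, 1.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
F = np.array([
    [0.0,  -E[0], -E[1], -E[2]],
    [E[0],  0.0,  -B[2],  B[1]],
    [E[1],  B[2],  0.0,  -B[0]],
    [E[2], -B[1],  B[0],  0.0],
])

# Boost along x with speed beta; each upper index picks up one factor of Lambda.
beta = 0.5
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([
    [ gamma,       -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,       0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
Fp = np.einsum('ma,nb,ab->mn', Lam, Lam, F)  # F'^{mn} = Lam^m_a Lam^n_b F^{ab}

Ey_boosted = Fp[2, 0]   # should equal gamma * (E_y - beta * B_z)
```

Two copies of $\Lambda$, one per index, give exactly $E'_y = \gamma(E_y - \beta B_z)$ and friends — but that's the "components that transform in the right way" definition, which is precisely the one I find unilluminating.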
Upon reflection, I realized I don't actually get what a rank $(k, 0)$ tensor even is. According to the definitions I've seen, such a tensor can be defined in one of three ways:
- A set of components that transform in the right way
- A multilinear map of covectors
- An element of the tensor product space $F(V \times \cdots \times V)/\sim$.
The first definition is kind of like defining a linear transformation as a matrix, or a vector as a tuple of components, and that really isn't what made those concepts click for me. I suppose I sort of view vectors as "linear objects," and linear transformations as structure-preserving maps. The second definition makes more mathematical sense, I suppose, but it's kind of like defining a vector as a linear map on covectors; I would never have thought that vectors were natural mathematical objects if they were defined that way. As for the third definition, it feels somewhat unmotivated. Sure, it makes sense and is rigorous, but it feels much more like a way of abstracting tensors for proofs than an intuition. Much like how a real number is often defined as an element of a complete ordered field, rather than... you know, a real number.
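To be clear, I do see that the second and third definitions agree with each other — a simple decomposable tensor $T = v \otimes w$ has components $T^{ij} = v^i w^j$ and, viewed as a machine, eats two covectors. A toy check (made-up components):

```python
import numpy as np

# A (2, 0) tensor built from the tensor-product picture: T = v (x) w.
v = np.array([1.0, 2.0])
w = np.array([3.0, 1.0])
T = np.outer(v, w)                  # components T^{ij} = v^i w^j

# The same tensor viewed as a bilinear map on covectors alpha, beta.
alpha = np.array([1.0, -1.0])       # covector components alpha_i
beta = np.array([0.0, 2.0])         # covector components beta_j

via_components = np.einsum('i,j,ij->', alpha, beta, T)  # alpha_i beta_j T^{ij}
via_maps = (alpha @ v) * (beta @ w)                     # alpha(v) * beta(w)
```

So the definitions are consistent; what I'm missing is why a "machine that eats covectors" is ever the natural object to reach for in the first place.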
So that's my question: what is a "natural" interpretation of these kinds of tensors? What I'm looking for is not a definition; I've been acquainted with quite a few of them. I'm looking for an intuition that explains why they are important and useful.