How does one view a linear transformation as a $(1,1)$ tensor?


I'm relatively new to tensor space theory, and while reading some material I came across authors describing an inner product as a $(0,2)$ tensor. I'm not sure why that is, but I think if I write a map $f$ as $$f: P \times P \rightarrow \mathbb{R} \\(p,q) \mapsto \int_{-T}^{T}p(x)q(x)\,\mathrm{d}x$$ this clearly defines an inner product. Here I'm taking two elements from $P$, which the integral eats and spits out something in $\mathbb{R}$, so it's like a $(0,2)$ tensor. Can I think of it like this?
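To check my understanding numerically, here's a small sketch of that integral inner product on polynomials (my own illustration, using numpy with $T=1$; the names are just for this example):

```python
import numpy as np

T = 1.0  # assumed integration bound for this sketch

def inner(p, q):
    """The (0,2)-tensor candidate: f(p, q) = ∫_{-T}^{T} p(x) q(x) dx."""
    product = p * q              # product of polynomials is a polynomial
    antideriv = product.integ()  # antiderivative
    return antideriv(T) - antideriv(-T)

p = np.polynomial.Polynomial([0, 1])  # p(x) = x
q = np.polynomial.Polynomial([0, 1])  # q(x) = x

# ∫_{-1}^{1} x^2 dx = 2/3; it is linear in each of its two slots
print(inner(p, q))  # ≈ 0.6667
```

The point is that `inner` takes two vectors (polynomials) and returns a real number, linearly in each argument, which is exactly the $(0,2)$ behavior.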

But I can't picture it for a linear transformation as a $(1,1)$ tensor. For a linear map $T$ defined between two finite-dimensional vector spaces, i.e. $$T:V \rightarrow W,$$ how is it a tensor? The target is in $W$, which is not $\mathbb{R}$, and by definition a tensor eats $r$ copies of $V^{*}$ and $s$ copies of $V$ and spits out a real number. Also, what happens if $T$ is an endomorphism? I'm having a hard time imagining it.



On BEST ANSWER

Disclaimer: I'm mostly used to dealing with tensors in finite-dimensional spaces (general relativity), but I see no reason the same shouldn't apply here. Read with caution, however, as conventions might differ.

When you contract tensors, one covariant order cancels one contravariant order. We can use this to study what kind of tensors we are dealing with.

(The components of) an inner product is a $(0,2)$ tensor because it eats (the components of) two vectors (i.e. $(1,0)$ tensors) and gives back a scalar (i.e. a $(0,0)$ tensor). You start with $(0,2)$, and each $(1,0)$ you feed it subtracts $1$ from the covariant order of the inner product. Your thinking on this is basically the same, just using different words.
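As a quick numerical sketch of this bookkeeping (my own, using numpy's `einsum`; the component values are made up), a $(0,2)$ tensor $g_{ij}$ fed two $(1,0)$ tensors leaves a $(0,0)$ tensor:

```python
import numpy as np

g = np.array([[2.0, 0.0],
              [0.0, 3.0]])  # components g_ij of an inner product: (0,2)
u = np.array([1.0, 2.0])    # components u^i of a vector: (1,0)
v = np.array([3.0, 4.0])    # components v^j of a vector: (1,0)

# each vector fed in cancels one covariant index; nothing is left over
s = np.einsum('ij,i,j->', g, u, v)
print(s)  # 2*1*3 + 3*2*4 = 30.0
```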

(The components of) a linear transformation eats (the components of) one vector and gives back (the components of) a vector. In other words, whatever kind of tensor you want the linear transformation to be, once you feed it a $(1,0)$ tensor, you ought to be left with $(1,0)$. That means it must have started as $(1,1)$.
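In components this is just matrix-vector multiplication, which can be written as an index contraction (a sketch with made-up numbers, using numpy):

```python
import numpy as np

T = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # components T^i_j of an endomorphism: (1,1)
v = np.array([2.0, 5.0])    # components v^j of a vector: (1,0)

# the lower index of T contracts against v; one upper index survives,
# so the result w^i is again a (1,0) tensor
w = np.einsum('ij,j->i', T, v)
print(w)  # [5. 2.]
```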

However, this reasoning applies only to endomorphisms. Once you have a general linear transformation between two different spaces, the map becomes a $(0,1)$ tensor over one space and a $(1,0)$ tensor over the other, simultaneously.


What I mean by "the components of" is that vectors, inner products and linear transformations by themselves are invariant, and thus all $(0,0)$ tensors. However, in order to do calculations, we often describe them in terms of a basis (when working with functional spaces, thinking of a choice of units in this regard isn't too far off; it's a special case). So the actual expressions you get (which correspond to the components of matrices in the finite dimensional case) will depend on what basis you are using, and this makes them covariant and contravariant.
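This basis dependence is what distinguishes the two tensor types. As a sketch (my own example with made-up components): under a change of basis $P$, the matrix of an endomorphism ($(1,1)$, mixed variance) transforms as $P^{-1}AP$, while the matrix of an inner product ($(0,2)$, purely covariant) transforms as $P^{T}GP$:

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 3.0]])  # endomorphism components, (1,1)
G = np.array([[2.0, 1.0], [1.0, 2.0]])  # inner product components, (0,2)
P = np.array([[1.0, 1.0], [0.0, 1.0]])  # invertible change-of-basis matrix

A_new = np.linalg.inv(P) @ A @ P  # one contravariant, one covariant index
G_new = P.T @ G @ P               # two covariant indices

# the eigenvalues of A are basis-independent: it is the same (1,1) tensor
print(np.sort(np.linalg.eigvals(A_new)))  # [1. 3.]
```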


The linear map $T:V\to W$ belongs to the tensor space $W\otimes V^*$. As that is not a product of $V$ and $V^*$, it is not really appropriate to call it a $(1,1)$ tensor. That would apply to linear maps $T:V\to V$, which belong to $V\otimes V^*=T^{(1,1)}V$.

The simple tensors $a\otimes \beta\in W\otimes V^*$ get identified with the rank-$1$ linear maps $v\mapsto \beta(v)\,a$. With any biorthogonal pair of bases $e_1,\dots,e_m$ of $V$ and $\theta^1,\dots,\theta^m$ of $V^*$, the tensor corresponding to a general linear map $T$ is $\sum_{k=1}^m T(e_k)\otimes \theta^k$.
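This identification can be checked directly in coordinates (a sketch, assuming the standard basis of $\mathbb{R}^m$ and its dual, where $a\otimes\theta^k$ becomes the outer product of a column with a dual row):

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # matrix of T : V -> W in the chosen bases

m = T.shape[1]
basis = np.eye(m)  # columns are e_k; the dual basis θ^k picks out rows

# Σ_k T(e_k) ⊗ θ^k, with each simple tensor realized as an outer product
recon = sum(np.outer(T @ basis[:, k], basis[k, :]) for k in range(m))
print(np.allclose(recon, T))  # True
```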