Is it possible to define the tensor product of two vectors with respect to a bilinear form?


Given two vectors $\vec{v},\vec{w} \in \mathbb{R}^n$, and a bilinear form $\mathcal{B}$ represented by an $n \times n$ matrix $B$, we can define the inner product of $\vec{v}$ and $\vec{w}$ with respect to $\mathcal{B}$ as follows: $$\langle \vec{v} , \vec{w} \rangle_{\mathcal{B}} = \vec{v}^T B \vec{w} = \vec{v}^T (B\vec{w}) = (B^T\vec{v})^T \vec{w}$$
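As a quick numeric sanity check of these three equivalent groupings (a NumPy sketch; the random $B$, $\vec{v}$, $\vec{w}$ are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))  # an arbitrary bilinear form (not necessarily symmetric)
v = rng.standard_normal(n)
w = rng.standard_normal(n)

# <v, w>_B = v^T B w, computed three equivalent ways
ip1 = v @ B @ w
ip2 = v @ (B @ w)       # pair B with w first
ip3 = (B.T @ v) @ w     # pair B^T with v first

assert np.allclose(ip1, ip2) and np.allclose(ip1, ip3)
```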

The tensor product of $\vec{v}$ and $\vec{w}$ can be defined as follows: $$\vec{v}\otimes \vec{w} = \vec{v}\vec{w}^T$$ (see, for example, here: https://en.wikipedia.org/wiki/Dyadics#Definitions_and_terminology).

Is there a sensible and consistent way to define a tensor product with respect to $\mathcal{B}$ when $\mathcal{B}$ is not the identity bilinear form? If we let $I_n$ be the identity $n \times n$ matrix, and thus the matrix representation of the identity bilinear form, the above equation suggests multiple possible ways to attempt to generalize, although none stand out as being especially sensible to me.

$$\vec{v} \otimes \vec{w} = I_n \vec{v} \vec{w}^T = \vec{v} (I_n \vec{w})^T = \vec{v}\vec{w}^T I_n = I_n \vec{v} \vec{w}^T I_n=\dots$$

I was thinking maybe $$\vec{v}(B\vec{w})^T=\vec{v} \vec{w}^T B^T \quad \text{or} \quad (B^T \vec{v})\vec{w}^T = B^T \vec{v}\vec{w}^T$$ since they seem like the closest analogies to both definitions above.

I would greatly appreciate your thoughts.

There are 2 answers below.

BEST ANSWER

You can characterise $\vec{v} \otimes \vec{w}$ as the unique $n \times n$ matrix such that $$ \forall \vec{x} \in \mathbb{R}^n, \quad (\vec{v} \otimes \vec{w})\vec{x} = \langle \vec{w}, \vec{x} \rangle \vec{v}, $$ where $\langle \vec{w}, \vec{x} \rangle = \vec{w}^T\vec{x}$ denotes the usual inner product on $\mathbb{R}^n$. From this perspective, then, it might make sense to define $\vec{v} \otimes_\mathcal{B} \vec{w}$ to be the unique $n \times n$ matrix such that $$ \forall \vec{x} \in \mathbb{R}^n, \quad (\vec{v} \otimes_\mathcal{B} \vec{w})\vec{x} = \langle \vec{w}, \vec{x} \rangle_\mathcal{B} \vec{v}, $$ in which case one can easily check that $$ \vec{v} \otimes_\mathcal{B} \vec{w} = \vec{v}\vec{w}^TB = \vec{v} \otimes B^T \vec{w}. $$ Be forewarned, though, that such a construction is not particularly consistent with the general machinery of tensor products of vector spaces in more advanced linear algebra.
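This characterisation can be checked numerically (a NumPy sketch with arbitrary random inputs, not part of the answer itself):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n))
v, w, x = rng.standard_normal((3, n))

M = np.outer(v, w) @ B  # candidate for v ⊗_B w, i.e. v w^T B

# defining property: (v ⊗_B w) x = <w, x>_B v, where <w, x>_B = w^T B x
assert np.allclose(M @ x, (w @ B @ x) * v)

# and it is the same matrix as v ⊗ (B^T w)
assert np.allclose(M, np.outer(v, B.T @ w))
```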


One thing I was considering: suppose $\vec{v}$ and $\vec{w}$ are members of a basis that is orthonormal with respect to $\mathcal{B}$, and define $$\vec{v} \otimes_{\mathcal{B}} \vec{w}:= B \vec{v} \vec{w}^T B.$$ Let $A$ be the matrix whose columns are this $\mathcal{B}$-orthonormal basis (so that $A^T B A = I_n$), with $\vec{v}$ the $k$th column of $A$ and $\vec{w}$ the $m$th column. If we then represent $\vec{v} \otimes_{\mathcal{B}}\vec{w}$ in the coordinates defined by this basis, we get $$A^T B \vec{v} \vec{w}^T B A= \vec{e}_k \vec{e}_m^T = \vec{e}_k \otimes \vec{e}_m = \vec{e}_k \otimes_{\mathcal{Id}}\vec{e}_m,$$ where $\vec{e}_1, \dots, \vec{e}_n$ is the standard Euclidean basis, which is orthonormal with respect to the identity bilinear form $\mathcal{Id}$.

And since $\vec{v}$ is the $k$th orthonormal basis vector for $\mathcal{B}$ and $\vec{w}$ is the $m$th, we would want the matrix representation of $\vec{v} \otimes_{\mathcal{B}}\vec{w}$ with respect to the coordinates defined by the orthonormal basis for $\mathcal{B}$ to be the same as the matrix representation of $\vec{e}_k \otimes \vec{e}_m$ with respect to the standard coordinates. The above definition seems to be the only one that accomplishes this.
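The computation above can be verified numerically. A NumPy sketch, assuming a symmetric positive-definite $B$ and using the Cholesky factorization $B = LL^T$ to construct a $B$-orthonormal basis (the specific construction via $A = (L^{-1})^T$ is my own choice, not part of the answer):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
C = rng.standard_normal((n, n))
B = C @ C.T + n * np.eye(n)  # a symmetric positive-definite bilinear form

# Build a B-orthonormal basis: if B = L L^T, the columns of A = (L^{-1})^T
# satisfy A^T B A = I.
L = np.linalg.cholesky(B)
A = np.linalg.inv(L).T
assert np.allclose(A.T @ B @ A, np.eye(n))

k, m = 1, 3
v, w = A[:, k], A[:, m]

# coordinates of B v w^T B in the basis A reduce to e_k e_m^T
E = A.T @ (B @ np.outer(v, w) @ B) @ A
expected = np.zeros((n, n))
expected[k, m] = 1.0
assert np.allclose(E, expected)
```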

However, I still have the major quibble that the matrix $B$ appears twice in this definition, whereas it appears only once in the definition of $\langle \vec{v}, \vec{w} \rangle_{\mathcal{B}}$. This may not be mathematically significant, but it still bothers me. It is also possible that one of the two $B$'s (or even both?) in the above definition should be transposed instead; actually, that shouldn't matter, because I am only interested in the case when $\mathcal{B}$ induces a quadratic form, and in that case $B^T=B$.

Update: This new identity which I recently learned regarding tensors probably sheds some light on this issue, although I have not yet fully thought through all of the implications.

If $I$ is the identity operator, then it is also a second order tensor of rank $(1,1)$, and for the standard basis $e_1, e_2, \dots, e_n$ of $\mathbb{R}^n$ we have the following: $$I = \sum_{i=1}^n e_i \otimes e_i$$

I thought this result depended upon the choice of coordinates, but it turns out that it doesn't entirely. In particular, we have for any orthonormal basis $u_1, \dots, u_n$ of $\mathbb{R}^n$, orthonormal with respect to the rank (0,2) tensor, i.e. bilinear form, which corresponds naturally to $I$, that $$I= \sum_{i=1}^n u_i \otimes u_i$$ So my concern about making tensor products "compatible" with symmetric bilinear forms may have been ill-founded.
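The basis-independence of this completeness relation is easy to check numerically (a NumPy sketch; the orthonormal basis is generated via a QR factorization of a random matrix, an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
# a random orthonormal basis (with respect to the standard inner product) via QR
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
assert np.allclose(Q.T @ Q, np.eye(n))

# completeness relation: I = sum_i u_i ⊗ u_i, where u_i are the columns of Q
S = sum(np.outer(Q[:, i], Q[:, i]) for i in range(n))
assert np.allclose(S, np.eye(n))
```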

In particular, I conjecture the following: given any rank $(0,2)$ tensor (i.e. a bilinear form) $\tilde{B}$, then for every basis $b_1, \dots, b_n$ of $\mathbb{R}^n$ which is orthonormal with respect to $\tilde{B}$, the rank $(1,1)$ tensor $B$ corresponding to $\tilde{B}$ satisfies $$B = \sum_{i=1}^n b_i \otimes b_i.$$ In particular, I guess that $b_i \wedge_{\tilde{B}} b_j$ would have the same coordinate representation as $e_i \wedge_{id} e_j$. Such a correspondence would make it much easier to combine the geometric algebra and tensor algebra formalisms, which was my original motivation for asking this question.