What are the matricization and vectorization of tensor products?


I'm trying to understand the concepts of matricization (matrix unfolding) and vectorization of tensor products. In the past, I've only dealt with tensor products of infinite-dimensional Banach and Hilbert spaces and hence my view on tensor products differs from most sources introducing the aforementioned concepts. So, I would be really thankful if someone could explain to me how they fit into my understanding.

If $E_i$ is a $\mathbb R$-vector space, I'm defining $$(x_1\otimes x_2)(B):=B(x_1,x_2)\;\;\;\text{for }B\in\mathcal B(E_1\times E_2)\text{ and }x_i\in E_i,$$ where $\mathcal B(E_1\times E_2)$ is the space of bilinear forms on $E_1\times E_2$, and $$E_1\otimes E_2:=\operatorname{span}\{x_1\otimes x_2:x_i\in E_i\}\subseteq{\mathcal B(E_1\times E_2)}^\ast.$$ If $B_i$ is a basis of $E_i$, then $\{e_1\otimes e_2:e_i\in B_i\}$ is a basis of $E_1\otimes E_2$ and hence, if $\dim E_i\in\mathbb N$, then $\dim(E_1\otimes E_2)=\dim E_1\dim E_2$.
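For concreteness, here is a small NumPy check of this setup (the dimensions and vectors are chosen arbitrarily): identifying $E_i$ with $\mathbb R^{d_i}$, the coordinates of $x_1\otimes x_2$ in the product basis form the outer product of the coordinate vectors, and evaluating $x_1\otimes x_2$ on a bilinear form (given by a matrix) matches the coefficient pairing.

```python
import numpy as np

# Coordinates of x1 ⊗ x2 in the basis {e_j ⊗ f_k}: the outer product
# of the coordinate vectors, so dim(E1 ⊗ E2) = d1 * d2.
d1, d2 = 3, 2
x1 = np.array([1.0, 2.0, -1.0])
x2 = np.array([0.5, 4.0])

coords = np.outer(x1, x2)          # (d1, d2) coefficient array
assert coords.shape == (d1, d2)    # d1 * d2 coefficients in total

# (x1 ⊗ x2)(B) = B(x1, x2) for a bilinear form B given by a matrix M:
M = np.arange(d1 * d2, dtype=float).reshape(d1, d2)
assert np.isclose(x1 @ M @ x2, np.sum(coords * M))
```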

If $F_i$ is a $\mathbb R$-vector space and $A_i:E_i\to F_i$ is linear, the linearization $A_1\otimes A_2$ of $$E_1\times E_2\ni(x_1,x_2)\mapsto A_1x_1\otimes A_2x_2\tag1$$ is a linear operator from $E_1\otimes E_2$ to $F_1\otimes F_2$.
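In coordinates (assuming the standard identifications $E_i\cong\mathbb R^{n_i}$, $F_i\cong\mathbb R^{m_i}$ and row-major vectorization of elementary tensors), the operator $A_1\otimes A_2$ is represented by the Kronecker product of the matrices of $A_1$ and $A_2$; a quick numerical sanity check of $(1)$:

```python
import numpy as np

# Sketch: np.kron(A1, A2) represents A1 ⊗ A2, and np.kron(x1, x2)
# represents the elementary tensor x1 ⊗ x2 in coordinates.
rng = np.random.default_rng(0)
A1 = rng.standard_normal((4, 3))   # A1 : R^3 -> R^4
A2 = rng.standard_normal((5, 2))   # A2 : R^2 -> R^5
x1 = rng.standard_normal(3)
x2 = rng.standard_normal(2)

lhs = np.kron(A1, A2) @ np.kron(x1, x2)   # (A1 ⊗ A2)(x1 ⊗ x2)
rhs = np.kron(A1 @ x1, A2 @ x2)           # A1 x1 ⊗ A2 x2
assert np.allclose(lhs, rhs)
```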

$E_1^\ast\otimes E_2$ is naturally embedded into $\mathcal L(E_1,E_2)$ via $$(\varphi\otimes y)(x)=\varphi(x)y\;\;\;\text{for all }(\varphi,y)\in E_1^\ast\times E_2.\tag2$$
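In matrix terms (assuming the standard identifications $E_1\cong\mathbb R^{d_1}$, $E_2\cong\mathbb R^{d_2}$, so a functional $\varphi$ is a row vector), $\varphi\otimes y$ is the rank-one matrix $y\varphi^\top$:

```python
import numpy as np

# Sketch of (2): the rank-one operator x ↦ phi(x) y is the matrix
# np.outer(y, phi), i.e. y phi^T.
phi = np.array([1.0, -2.0, 3.0])   # functional on R^3 (as a row vector)
y = np.array([2.0, 5.0])           # vector in R^2
x = np.array([0.5, 1.0, -1.0])

op = np.outer(y, phi)              # (d2, d1) matrix y phi^T
assert np.allclose(op @ x, (phi @ x) * y)
assert np.linalg.matrix_rank(op) == 1
```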

If $H_i$ is a pre-$\mathbb R$-Hilbert space, there is a unique inner product $\langle\;\cdot\;,\;\cdot\;\rangle_{H_1\otimes H_2}$ on $H_1\otimes H_2$ with $$\langle x_1\otimes y_1,x_2\otimes y_2\rangle_{H_1\otimes H_2}=\langle x_1,x_2\rangle_{H_1}\langle y_1,y_2\rangle_{H_2}\tag3$$ for all $(x_1,y_1),(x_2,y_2)\in H_1\times H_2$.
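A numerical check of $(3)$ in the finite-dimensional case (assuming the standard inner products on $\mathbb R^{d_i}$ and representing elementary tensors by Kronecker products of coordinate vectors):

```python
import numpy as np

# The tensor-product inner product of elementary tensors factors into
# the product of the inner products on the two factors.
rng = np.random.default_rng(1)
x1, x2 = rng.standard_normal(3), rng.standard_normal(3)
y1, y2 = rng.standard_normal(4), rng.standard_normal(4)

lhs = np.kron(x1, y1) @ np.kron(x2, y2)   # <x1 ⊗ y1, x2 ⊗ y2>
rhs = (x1 @ x2) * (y1 @ y2)               # <x1, x2> <y1, y2>
assert np.isclose(lhs, rhs)
```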

Now, if $d_i:=\dim H_i\in\mathbb N$, it is clear to me that we may fix orthonormal bases $(e_1,\ldots,e_{d_1})$ and $(f_1,\ldots,f_{d_2})$ of $H_1$ and $H_2$, respectively, and denote \begin{align}x_j&:=\langle x,e_j\rangle_{H_1},\\y_k&:=\langle y,f_k\rangle_{H_2}\end{align} for $x\in H_1,y\in H_2$ and $j\in\{1,\ldots,d_1\},k\in\{1,\ldots,d_2\}$. Now, clearly, $H_i\cong\mathbb R^{d_i}$ and I guess it's assumed that $H_i=\mathbb R^{d_i}$ and $(e_1,\ldots,e_{d_1})$ and $(f_1,\ldots,f_{d_2})$ are the standard bases of $\mathbb R^{d_1}$ and $\mathbb R^{d_2}$, respectively.
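A small sketch of this coordinate map for an arbitrary (not necessarily standard) orthonormal basis, here obtained from a QR factorization:

```python
import numpy as np

# Coordinates x_j = <x, e_j> with respect to an orthonormal basis
# given by the columns of an orthogonal matrix Q.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # ONB of R^3 as columns
x = rng.standard_normal(3)

coords = Q.T @ x                    # x_j = <x, e_j>
assert np.allclose(Q @ coords, x)   # x = sum_j x_j e_j
```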

How precisely are matricization and vectorization now defined? I'm really trying to understand how these things fit into the more abstract setting described before.

Remark: It's clear to me that if $A\in\mathbb R^{d_2\times d_1}$, then we may treat $A$ as a linear operator from $\mathbb R^{d_1}$ to $\mathbb R^{d_2}$ which, identifying $\mathbb R^{d_1}$ with its dual via the standard inner product, is the one identified with $$\sum_{i=1}^{d_1}e_i\otimes Ae_i\tag4$$ (noting that $Ae_i$ is the $i$th column of $A$).
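A numerical check of $(4)$: summing the rank-one operators built from the columns $Ae_i$ of $A$ (via the embedding $(2)$, with $e_i$ read as the functional $\langle\cdot,e_i\rangle$) recovers $A$.

```python
import numpy as np

# A = sum_i e_i ⊗ (A e_i): each summand is the rank-one matrix
# (A e_i) e_i^T, and the sum reconstructs A column by column.
rng = np.random.default_rng(3)
d1, d2 = 3, 4
A = rng.standard_normal((d2, d1))
eye = np.eye(d1)

cols = [A @ eye[:, i] for i in range(d1)]            # A e_i = i-th column
recon = sum(np.outer(c, eye[:, i]) for i, c in enumerate(cols))
assert np.allclose(recon, A)
```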

EDIT: If the linear operators $A_i:H_i\to F_i$ can be identified with \begin{align}A_1&=\sum_{i=1}^{d_1}e_i\otimes u_i,\\A_2&=\sum_{j=1}^{d_2}f_j\otimes v_j\tag5\end{align} for some $u_i,v_j$ and $x\in H_1,y\in H_2$, then $$(A_1\otimes A_2)(x\otimes y)=A_1(x)\otimes A_2(y)=\sum_{i=1}^{d_1}\langle x,e_i\rangle_{H_1}u_i\otimes\sum_{j=1}^{d_2}\langle y,f_j\rangle_{H_2}v_j\tag6.$$ Now we can identify $x,y$ with $$\begin{pmatrix}\langle x,e_1\rangle_{H_1}\\\vdots\\\langle x,e_{d_1}\rangle_{H_1}\end{pmatrix},\begin{pmatrix}\langle y,f_1\rangle_{H_2}\\\vdots\\\langle y,f_{d_2}\rangle_{H_2}\end{pmatrix}\tag7$$ and I think we can identify $A_1\otimes A_2$ with some kind of matrix. And maybe this can be generalized to the completion of the tensor products with respect to the projective norm (the operators $A_1$ and $A_2$ correspond to nuclear / trace class operators then).
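A sketch of the matrix in question (assuming the standard identifications and row-major vectorization): if $A_1,A_2$ have matrices $M_1,M_2$, then $A_1\otimes A_2$ acts on the coefficient matrix $X=xy^\top$ of $x\otimes y$ as $X\mapsto M_1XM_2^\top$, and after vectorization it is the Kronecker product matrix.

```python
import numpy as np

# (A1 ⊗ A2) in coordinates: X ↦ M1 X M2^T on coefficient matrices,
# equivalently np.kron(M1, M2) on row-major vectorizations.
rng = np.random.default_rng(4)
M1 = rng.standard_normal((5, 3))
M2 = rng.standard_normal((4, 2))
x, y = rng.standard_normal(3), rng.standard_normal(2)

X = np.outer(x, y)                             # coefficients of x ⊗ y
lhs = (M1 @ X @ M2.T).ravel()                  # matricized action, vec'd
rhs = np.kron(M1, M2) @ X.ravel()              # Kronecker matrix on vec(X)
assert np.allclose(lhs, rhs)
assert np.allclose(rhs, np.kron(M1 @ x, M2 @ y))  # = A1 x ⊗ A2 y
```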