Assume a tall and slim tensor with 3 dimensions: $\mathbf{T}\in \mathbb{R}^{a \times b \times c}$ where $a=10000$, $b=5000$, and $c=5$. The tensor is very sparsely filled.
I want to do tensor completion by interpolating from the known sparse values, and for this I turn to the simple canonical (CP) decomposition:
$\mathbf{T}_{xyz} = \sum_{k=1}^f \mathbf{A}_{xk} \cdot \mathbf{B}_{yk} \cdot \mathbf{C}_{zk}$,
where we have component matrices $\mathbf{A} \in \mathbb{R} ^{a \times f}, \mathbf{B}\in \mathbb{R} ^{b \times f}, \mathbf{C}\in \mathbb{R} ^{c \times f}$. I then minimize an objective function that iterates over the available data to fit the component matrices.
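For concreteness, here is a minimal NumPy sketch of this setup; the dimensions are shrunk stand-ins for the ones in the question, and the mask/data variables are hypothetical placeholders for the sparse observations:

```python
import numpy as np

# Small stand-in dimensions (the question uses a=10000, b=5000, c=5, f=20).
a, b, c, f = 30, 25, 5, 20
rng = np.random.default_rng(0)

# Sparse "data": a mask of observed entries and their values.
mask = rng.random((a, b, c)) < 0.02            # ~2% of entries observed
data = rng.standard_normal((a, b, c)) * mask

# Component matrices to be fitted.
A = rng.standard_normal((a, f))
B = rng.standard_normal((b, f))
C = rng.standard_normal((c, f))

# CP model: T[x, y, z] = sum_k A[x, k] * B[y, k] * C[z, k]
T_hat = np.einsum('xk,yk,zk->xyz', A, B, C)

# Squared error over the observed entries only.
loss = np.sum((mask * (T_hat - data)) ** 2)
```

In an actual completion algorithm, `A`, `B`, and `C` would be updated iteratively (e.g. by alternating least squares or gradient descent) to decrease `loss`.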
As a novice in tensor factorization, my question is: say $f=20$; then I will have a component matrix $\mathbf{C}\in \mathbb{R} ^{5 \times 20}$. Thinking of (and getting puzzled by) matrix rank and the SVD, this matrix can have a maximum rank of 5, so what is the use of choosing $f=20$? In other words, is there any point in having $f$ larger than one of the dimensions of the tensor $\mathbf{T}$?
Thanks for your comments ...
The multilinear ranks never exceed the dimensions, but the tensor rank may. In your case (i.e., assuming a generic random sparse tensor), the multilinear rank is $(\min(a,f),\min(b,f),\min(c,f))=(20,20,5)$.
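You can check this numerically: build a tensor from random rank-$f$ factors and compute the matrix rank of each mode-$n$ unfolding. A sketch with shrunk stand-in dimensions (the question's $c=5$ and $f=20$ are kept):

```python
import numpy as np

a, b, c, f = 30, 25, 5, 20
rng = np.random.default_rng(0)
A = rng.standard_normal((a, f))
B = rng.standard_normal((b, f))
C = rng.standard_normal((c, f))
T = np.einsum('xk,yk,zk->xyz', A, B, C)   # CP tensor with f=20 components

# Multilinear ranks = matrix ranks of the three mode-n unfoldings.
r1 = np.linalg.matrix_rank(T.reshape(a, b * c))                      # mode-1
r2 = np.linalg.matrix_rank(np.moveaxis(T, 1, 0).reshape(b, a * c))   # mode-2
r3 = np.linalg.matrix_rank(np.moveaxis(T, 2, 0).reshape(c, a * b))   # mode-3
print(r1, r2, r3)   # 20 20 5
```

So even though no unfolding can have rank above $5$ in the third mode, the tensor itself genuinely carries $f=20$ rank-one components, which is why $f>c$ can still be useful.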
A practical way to compute the canonical decomposition: since the multilinear ranks $(20, 20, 5)$ are small, you can first compute the higher-order singular value decomposition (HOSVD) of $\mathbf{T}$ (the sparsity can be ignored). Then compute the canonical decomposition of the core tensor (which has dimensions $20\times 20\times 5$). Finally, the canonical decomposition of the original tensor can be recovered from the canonical decomposition of the core tensor.
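A NumPy sketch of this pipeline, again with shrunk stand-in dimensions. To keep the example self-contained, the CP factors of the core are written in closed form from the known ground truth; in practice you would run an iterative CP algorithm (e.g. TensorLy's `parafac`) on the small core instead:

```python
import numpy as np

a, b, c, f = 30, 25, 5, 20
rng = np.random.default_rng(0)
A = rng.standard_normal((a, f))
B = rng.standard_normal((b, f))
C = rng.standard_normal((c, f))
T = np.einsum('xk,yk,zk->xyz', A, B, C)

def mode_unfold(X, n):
    """Matricize X along axis n (rows indexed by that mode)."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

# HOSVD: leading left singular vectors of each unfolding,
# truncated to the multilinear ranks (20, 20, 5).
ranks = (20, 20, 5)
U = [np.linalg.svd(mode_unfold(T, n), full_matrices=False)[0][:, :r]
     for n, r in enumerate(ranks)]

# Core tensor G = T x1 U1^T x2 U2^T x3 U3^T, of size 20 x 20 x 5.
G = np.einsum('xyz,xi,yj,zk->ijk', T, U[0], U[1], U[2])

# CP of the small core; here it is known exactly, since
# G = sum_k (U1^T a_k) o (U2^T b_k) o (U3^T c_k).
Ag, Bg, Cg = U[0].T @ A, U[1].T @ B, U[2].T @ C

# Map the core's CP factors back with U_n to recover the CP of T.
A_rec, B_rec, C_rec = U[0] @ Ag, U[1] @ Bg, U[2] @ Cg
T_rec = np.einsum('xk,yk,zk->xyz', A_rec, B_rec, C_rec)
print(np.allclose(T_rec, T))   # True
```

The payoff is that the expensive iterative CP fit runs on a $20\times 20\times 5$ core instead of the full $10000\times 5000\times 5$ tensor.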