Irreducible decompositions
Say that a decomposition $f(x,y) = \sum_i U_i(x)V_i(y)$ is irreducible if the $U_i$ are all linearly independent, as are the $V_i$. The rank of a decomposition is the number of terms in the sum.
In a previous question, I established that any two irreducible decompositions $f(x,y)=\sum_i U_i(x)V_i(y) = \sum_i P_i(x)Q_i(y)$ have the same rank. Moreover, the set $\{U_1, \ldots, U_n, P_1, \ldots, P_n\}$ is linearly dependent, as is the set $\{V_1,\ldots,V_n,Q_1,\ldots, Q_n\}$. (Linear independence when writing a function as a sum of functions.)
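On finite domains this setup has a concrete matrix analogue (my modelling choice, not part of the question): sampling $f$ on finite grids turns $f$ into a matrix $M$, a decomposition $\sum_i U_i(x)V_i(y)$ into a factorization $M = AB^T$ with columns $A_{\cdot i} = U_i$, $B_{\cdot i} = V_i$, and irreducibility into both factors having full column rank. A minimal numerical sketch of rank uniqueness in that model:

```python
import numpy as np

# Two visibly different irreducible decompositions of the same rank-2 "function":
# columns of A are the U_i, columns of B are the V_i, and M = A @ B.T plays f.
A1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
B1 = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]])
M = A1 @ B1.T

C = np.array([[2.0, 1.0], [1.0, 1.0]])   # invertible change of basis
A2 = A1 @ C                               # new "U" system
B2 = B1 @ np.linalg.inv(C).T              # new "V" system
assert np.allclose(M, A2 @ B2.T)          # same function f

# Both decompositions have the same rank (number of terms)...
assert np.linalg.matrix_rank(A1) == np.linalg.matrix_rank(A2) == 2
# ...and here the {U_i} and the new system span the same column space:
assert np.linalg.matrix_rank(np.hstack([A1, A2])) == 2
print("ranks and spans agree")
```

In this finite model the span statement is automatic: full column rank forces the span of the factors to equal the column space of $M$, which depends only on $M$.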
Question:
I am trying to establish whether/when a stronger result might hold, namely that if the decompositions are irreducible, then the $\{U_i\}$ and the $\{P_i\}$ necessarily span the same space, i.e., each $P_i$ is a linear combination of the $U_j$:
If $f(x,y)=\sum_i U_i(x)V_i(y) = \sum_i P_i(x)Q_i(y)$ and both decompositions are irreducible, then $\text{span}(\{U_i\}) = \text{span}(\{P_i\})$.
I know that if $n$ functions $f_i$ are independent, then there exist $n$ points $x_i$ such that the matrix $[f_i(x_j)]$ is invertible. So I can prove this result if there is a set of points $x_i$ such that both $[U_i(x_j)]$ and $[P_i(x_j)]$ are invertible. Or if the statement is false, perhaps there's an easy example of a particular $f$ and two decompositions where the conjecture fails. Alternatively, there might be a way to represent $f(x,y)$ as a decomposition involving both the $U_i$ and $P_i$, then using the minimality property to winnow it down—but I haven't had much luck there. Any help is appreciated.
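To make the quoted fact concrete, here is a small check for one hand-picked independent system (the monomials $1, x, x^2$, which are my choice of example): the points $x_j = 0, 1, 2$ make $[f_i(x_j)]$ a Vandermonde matrix, invertible because the points are distinct.

```python
import numpy as np

# Rows of F are (1, x_j, x_j^2) at the sample points x_j = 0, 1, 2.
pts = np.array([0.0, 1.0, 2.0])
F = np.vander(pts, 3, increasing=True)

# Vandermonde determinant: prod_{i<j} (x_j - x_i) = (1-0)(2-0)(2-1) = 2.
assert abs(np.linalg.det(F) - 2.0) < 1e-9
print("matrix [f_i(x_j)] is invertible")
```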
I also tried defining $$D(\vec{\alpha})\equiv \det\big([U_i(\alpha_j)]_{i,j}\cdot [P_i(\alpha_j)]_{i,j}\big),$$ which, by the multiplicative property of determinants, equals $\det([U_i(\alpha_j)])\cdot\det([P_i(\alpha_j)])$; hence $D$ is identically zero precisely when there is no collection of $n$ points $\alpha_1,\ldots,\alpha_n$ that simultaneously makes both matrices invertible.
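A hedged numerical check of this $D(\vec\alpha)$ for the toy systems $U = (1,\ x)$ and $P = (1+x,\ 1-x)$ (my choice of example, where both systems span the degree-$\le 1$ polynomials):

```python
import numpy as np

U = [lambda x: np.ones_like(x), lambda x: x]
P = [lambda x: 1.0 + x, lambda x: 1.0 - x]

def D(alpha):
    """det([U_i(alpha_j)] @ [P_i(alpha_j)]), as in the question."""
    alpha = np.asarray(alpha, dtype=float)
    MU = np.array([f(alpha) for f in U])   # rows i, columns j
    MP = np.array([f(alpha) for f in P])
    return np.linalg.det(MU @ MP)

# At (0, 1): [U_i(a_j)] = [[1,1],[0,1]] (det 1) and
# [P_i(a_j)] = [[1,2],[1,0]] (det -2), so D = -2 != 0: both matrices are
# simultaneously invertible at these points.
assert abs(D([0.0, 1.0]) + 2.0) < 1e-9
```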
In short:
Just how unique are irreducible decompositions? Are any two irreducible decompositions $\sum_i U_i(x)V_i(y) = \sum_i P_i(x)Q_i(y) $ linearly related to each other with $\text{span}(U_i)=\text{span}(P_i)$, or are there decompositions that are significantly different from one another?
Note: the notations are a bit different from the ones in the question.
Let $k$ be a field and $X$, $Y$ non-empty sets. Let $V$ be the space of $k$-valued functions on $X$, and $W$ the space of $k$-valued functions on $Y$. Then we have an embedding of $V\otimes W$ into the space of functions on $X\times Y$
$$V\otimes W\to \mathcal{F}(X\times Y)$$
given by $$\sum f_i \otimes g_i \mapsto \sum f_i(x)\, g_i(y)$$
Now we can forget that $V$, $W$ are spaces of functions. The question is about elements of the space $V\otimes W$.
Now, every element $u \in V\otimes W$ can be written as a finite sum $$u = \sum_{i \in I} v_i \otimes w_i$$
By replacing the $w_i$ with a basis of their span (and regrouping terms), we may assume that the $w_i$ are linearly independent. (Even further, we may also assume that the $v_i$ are linearly independent, although we do not need that at this point.)
Note that for every linear functional $\phi \colon W\to k$ we have an element $u \cdot (1 \otimes \phi) \in V$
$$u \cdot (1 \otimes \phi) = \sum_{i\in I} v_i \cdot \phi(w_i)$$
We obtain in this way a linear map from $W^{\star}$ to $V$. If we take a representation of $u$ with the $w_i$ linearly independent, we see that the image of this map equals the span of the $v_i$.
Let us now take a representation of $u$ in which both systems $v_i$ and $w_i$ are linearly independent. We see that the image $$\{ u \cdot (1 \otimes \phi) \ \mid \ \phi \in W^{\star}\}$$ equals the vector space with basis $\{v_i\}$.
From this we see easily that neither the span nor the number of terms depends on the chosen irreducible representation of $u$.
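A finite-dimensional sketch of this contraction argument (my assumptions: $V = \mathbb{R}^3$, $W = \mathbb{R}^4$, and $u = \sum_i v_i \otimes w_i$ identified with the matrix $M = \sum_i v_i w_i^T$, so that $u \cdot (1 \otimes \phi)$ becomes $M\phi$):

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal((3, 2))   # columns: linearly independent v_i
w = rng.standard_normal((4, 2))   # columns: linearly independent w_i
M = v @ w.T                        # the element u = sum_i v_i (x) w_i

# Sweep phi over a basis of W* and collect the images u.(1 (x) phi) = M @ phi.
images = np.column_stack([M @ e for e in np.eye(4)])

# The images span exactly span(v_1, v_2), independently of the decomposition:
assert np.linalg.matrix_rank(images) == 2
assert np.linalg.matrix_rank(np.hstack([images, v])) == 2
print("image of phi -> u.(1 (x) phi) equals span(v_i)")
```

The design point: the image is the column space of $M$, which is determined by $u$ alone, so any irreducible decomposition must reproduce it.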
Note: The spans in $V$ and $W$ of an element $u \in V\otimes W$ are the smallest subspaces $V'\subset V$, $W'\subset W$ such that $u \in V'\otimes W'$.
$\bf{Added:}$ Why is the above map $V\otimes W \to \mathcal{F}(X\times Y)$ injective? Consider a linearly independent system $v_i$ in $V$ and a linearly independent system $w_j$ in $W$. We have to show that the function $$\sum \alpha_{ij} v_i(x) w_j(y)$$ on $X\times Y$ is the zero function only if all the $\alpha_{ij}$ are $0$.
Fix an $x\in X$. Then we have $$\sum_{j\in J} \Big( \sum_{i\in I} \alpha_{ij} v_i(x)\Big) w_j(y) = 0$$ for all $y \in Y$. Since the functions $w_j$ are linearly independent, this means $$\sum_{i\in I} \alpha_{ij} v_i(x)= 0$$ for all $j$. Since this holds for every $x\in X$, we have $$\sum_{i\in I} \alpha_{ij} v_i = 0$$ for all $j$. Now use that the $v_i$ are linearly independent.