I read a paper in which the authors seem to use a simplified criterion for convergence in distribution of random variables in a product space. The paper itself is very specific, so I will link it, but I will restate the problem in more universal notation here (https://arxiv.org/abs/1904.02585; the relevant results are Lemma 2.8 and Lemma B.2 in the appendix):
Let $S$ be a complete separable metric space and let $T := S \times S$. Furthermore, let $(X_n^1)_{n \in \mathbb{N}}$ and $(X_n^2)_{n \in \mathbb{N}}$ be two sequences of random variables taking values in $S$, and let $X^1$ and $X^2$ be two random variables in $S$. Now Lemma 2.8 shows that for all continuous and bounded functions $f_1, f_2 : S \to \mathbb{R}$, \begin{equation} \label{one} \mathbb{E} \left[f_1(X_n^1) f_2(X_n^2) \right] \to \mathbb{E} \left[f_1(X^1) \right] \mathbb{E} \left[f_2(X^2) \right]. \end{equation} So far so good. Now in the proof of Lemma B.2 they seem to say implicitly that this would already mean that the law of $(X_n^1, X_n^2)$ converges weakly to the product of the laws of $X^1$ and $X^2$.
But this is not the definition of weak convergence, since there are more continuous and bounded functions $f : S \times S \to \mathbb{R}$ than just products of two functions $f_1, f_2 : S \to \mathbb{R}$.
I have seen (for example here: Approximating continuous functions on a product space) that one can use the Stone–Weierstraß theorem to approximate continuous functions on the product by such products if the space $S$ is compact. In my scenario this is not the case.
It might be that there is something special about the structure of the space they are working with in the paper that I have not understood yet. In that case I cannot expect anyone here to help me, since it is very specific, as I said.
But maybe there is a general answer, and hence somebody has an idea why the paper takes it as implicit that the equation above implies distributional convergence on the product space.
Thank you in advance.
This seems very strange, because the condition does not depend on the joint distribution of $X^1$ and $X^2$ at all, while weak convergence does.
Let all $X_n^1$ and $X_n^2$ be uniform on $\{0, 1\}$ and independent of each other. Let $X^1 = X^2$ also be uniform on $\{0, 1\}$.
Then $\mathbb{E} \left[f_1(X_n^1)f_2(X_n^2) \right] = \frac{(f_1(0) + f_1(1))(f_2(0) + f_2(1))}{4} = \mathbb{E} \left[f_1(X^1) \right] \mathbb{E} \left[f_2(X^2) \right]$.
But if we take $f$ s.t. $f(0, 0) = f(1, 1) = 1$ and $f(0, 1) = f(1, 0) = 0$, then $\mathbb{E} f(X_n^1, X_n^2) = \frac{1}{2} \not\to 1 = \mathbb{E} f(X^1, X^2)$.