In a previous post I asked for help clarifying a property of stable convergence in distribution:
Definition
Let $X_n$ be a sequence of random variables defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ with values in $\mathbb{R}^N$. We say that the sequence $X_n$ converges stably in distribution with limit $X$, written $X_n\stackrel{\text{st}}{\longrightarrow} X$, if and only if, for any bounded continuous function $f:\mathbb{R}^N\to\mathbb{R}$ and any $\mathcal{F}$-measurable bounded random variable $W$, we have: $$ \lim_{n\rightarrow \infty}\mathbb{E}[f(X_n)\,W]=\mathbb{E}[f(X)\,W]. $$
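To see what the extra weight $W$ buys over plain convergence in distribution, here is a small numerical sketch (all choices here are illustrative, not from the definition above): with $U$ standard normal, $X_n = (-1)^n U$ converges in distribution to $U$, since every $X_n$ has the same law as $U$, yet $\mathbb{E}[f(X_n)\,W]$ oscillates for the bounded weight $W = I_{\{U > 0\}}$ and the bounded continuous $f = \tanh$, so $X_n$ does not converge stably.

```python
import numpy as np

# Illustrative sketch: X_n = (-1)^n * U has the same law as U for every n,
# so X_n -> U in distribution, but E[f(X_n) W] oscillates between two
# distinct values, violating the stable-convergence criterion.
rng = np.random.default_rng(0)
U = rng.standard_normal(1_000_000)
W = (U > 0).astype(float)        # bounded, F-measurable weight

even = np.mean(np.tanh(U) * W)   # E[f(X_n) W] for even n (X_n =  U)
odd = np.mean(np.tanh(-U) * W)   # E[f(X_n) W] for odd n  (X_n = -U)
print(even, odd)                 # two different subsequential limits
```

Monte Carlo gives `even` $\approx 0.3$ and `odd` $\approx -0.3$: the two subsequences of $\mathbb{E}[f(X_n)\,W]$ have different limits, even though all the $X_n$ share one distribution.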
What I need to prove now is the following:
Assume $$ (Y_n,Z)\stackrel{\text{d}}{\longrightarrow}(Y,Z) $$ for every measurable random variable $Z$; I must then show that
$$ (Y_n,Z)\stackrel{\text{st}}{\longrightarrow}(Y,Z) $$ for every measurable random variable $Z$. That is, I need to prove that, for any bounded continuous function $f$, any measurable $Z$, and any bounded random variable $W$, $$ \lim_{n\rightarrow \infty}\mathbb{E}[f(Y_n,Z)\,W]=\mathbb{E}[f(Y,Z)\,W]. $$
I tried, unsuccessfully, with the portmanteau theorem and Lévy's continuity theorem…
=================================================================
In practice I am trying to prove this proposition from the paper by Podolskij and Vetter:

I wrote up the following reasoning, mainly for $(1) \Rightarrow (3)$, but I am not sure it is correct.

$\def\dto{\xrightarrow{\mathrm{d}}}\def\stto{\xrightarrow{\mathrm{st}}}\def\mto{\xrightarrow{\mathrm{m}}}$$(3) \Rightarrow (2)$: Trivial.
$(2) \Rightarrow (1)$: Let $g \in C_b(\mathbb{R}^N)$ and let $W$ be a bounded $\mathscr{F}$-measurable random variable with $|W| \leqslant M$. Define\begin{align*} f: \mathbb{R}^N \times \mathbb{R} &\longrightarrow \mathbb{R},\\ (y, z) &\longmapsto g(y) \cdot \frac{1}{2} (|z + M| - |z - M|). \end{align*} Since $\frac{1}{2}(|z + M| - |z - M|) = z$ whenever $|z| \leqslant M$, we have $f(Y_n, W) = g(Y_n) W$ a.s. Because $(Y_n, W) \dto (Y, W)$ and $f \in C_b(\mathbb{R}^{N + 1})$,$$ E(g(Y_n) W) = E(f(Y_n, W)) \to E(f(Y, W)) = E(g(Y) W). \quad n \to \infty $$ Therefore, $Y_n \stto Y$.
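The factor $\frac{1}{2}(|z + M| - |z - M|)$ is exactly truncation of $z$ at $\pm M$, which is what makes $f$ bounded; a quick numerical sanity check of this identity (the value $M = 2$ is an arbitrary choice):

```python
import numpy as np

# Check that (|z + M| - |z - M|) / 2 equals z truncated to [-M, M]:
# it is z for |z| <= M, and +-M outside that interval.
M = 2.0
z = np.linspace(-5 * M, 5 * M, 1001)
trunc = 0.5 * (np.abs(z + M) - np.abs(z - M))
assert np.allclose(trunc, np.clip(z, -M, M))
```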
$(1) \Rightarrow (3)$: Suppose $Z$ and $W$ are $\mathscr{F}$-measurable and $W$ is bounded. First, for any $A \in \mathscr{B}(\mathbb{R}^N)$ and $B \in \mathscr{B}(\mathbb{R})$, there exists $\{g_k\} \subseteq C_b(\mathbb{R}^N)$ such that $g_k \mto I_A$, i.e.$$ m(\{ x \in \mathbb{R}^N \mid g_k(x) \neq I_A(x)\}) \to 0. \quad k \to \infty $$ For any $k \geqslant 1$, because $Y_n \stto Y$ and $I_B(Z) W$ is $\mathscr{F}$-measurable and bounded, then$$ E(g_k(Y_n) I_B(Z) W) \to E(g_k(Y) I_B(Z) W). \quad n \to \infty $$ Note that $g_k \mto I_A$ and $I_B(Z) W$ is bounded, thus$$ E(I_A(Y_n) I_B(Z) W) \to E(I_A(Y) I_B(Z) W). \quad n \to \infty \tag{1} $$
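For the record, the step from $g_k \mto I_A$ to (1) is the one I trust least; written out, it is a three-term estimate:
$$
\begin{aligned}
\bigl| E(I_A(Y_n) I_B(Z) W) - E(I_A(Y) I_B(Z) W) \bigr|
&\leqslant \bigl| E\bigl((I_A - g_k)(Y_n)\, I_B(Z) W\bigr) \bigr| \\
&\quad + \bigl| E(g_k(Y_n) I_B(Z) W) - E(g_k(Y) I_B(Z) W) \bigr| \\
&\quad + \bigl| E\bigl((g_k - I_A)(Y)\, I_B(Z) W\bigr) \bigr|.
\end{aligned}
$$
The middle term vanishes as $n \to \infty$ for each fixed $k$ by stable convergence, and the outer terms are bounded by $(1 + \|g_k\|_\infty)\, M'\, P(Y_n \in D_k)$ and $(1 + \|g_k\|_\infty)\, M'\, P(Y \in D_k)$, where $D_k = \{g_k \neq I_A\}$ and $|I_B(Z) W| \leqslant M'$. Note that $m(D_k) \to 0$ alone does not control these probabilities, unless the laws of the $Y_n$ and of $Y$ are suitably dominated by $m$ uniformly in $n$; this is exactly where I am unsure.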
Now, for any $C \in \mathscr{B}(\mathbb{R}^{N + 1})$, there exist $\{A_{k, j}\} \subseteq \mathscr{B}(\mathbb{R}^N)$ and $\{B_{k, j}\} \subseteq \mathscr{B}(\mathbb{R})$ such that $\{h_k\}$ defined by\begin{align*} h_k : \mathbb{R}^N \times \mathbb{R} &\longrightarrow \mathbb{R},\\ (y, z) &\longmapsto \sum_{j = 1}^{s_k} I_{A_{k, j}}(y) I_{B_{k, j}}(z) \end{align*} satisfies $h_k \mto I_C$. For any $k \geqslant 1$, (1) and linearity give$$ E(h_k(Y_n, Z) W) \to E(h_k(Y, Z) W). \quad n \to \infty $$ Because $h_k \mto I_C$ and $W$ is bounded,$$ E(I_C(Y_n, Z) W) \to E(I_C(Y, Z) W). \quad n \to \infty \tag{2} $$
Finally, for any $f \in C_b(\mathbb{R}^{N + 1})$, there exists a sequence of simple functions $\{f_k\}$ such that $f_k \rightrightarrows f$ (uniformly; this uses that $f$ is bounded). For any $k \geqslant 1$, (2) and linearity give$$ E(f_k(Y_n, Z) W) \to E(f_k(Y, Z) W). \quad n \to \infty $$ Because $f_k \rightrightarrows f$ and $W$ is bounded,$$ E(f(Y_n, Z) W) \to E(f(Y, Z) W). \quad n \to \infty $$ Therefore, $(Y_n, Z) \stto (Y, Z)$.
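Spelled out, this final passage rests only on the uniform bound
$$
\bigl| E(f(Y_n, Z) W) - E(f_k(Y_n, Z) W) \bigr| \leqslant \|W\|_\infty \, \sup_{(y, z)} |f(y, z) - f_k(y, z)|,
$$
which holds uniformly in $n$, so a standard $3\varepsilon$ argument combining it with (2) (applied to each indicator in $f_k$) yields the stated limit.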