Let $\mu_n$, $n\geq 1$, and $\mu$ be random probability measures on a Polish space. Also let $m_n$ and $m$ be the mean measures of $\mu_n$ and $\mu$, respectively, so that, for example, $m(\cdot)=E[\mu(\cdot)]$.
Now, suppose that: i) for every bounded measurable function $h(x)$, $\int h(x)\,d\mu_n(x)\rightarrow \int h(x)\,d\mu(x)$ in distribution as $n\rightarrow+\infty$ (this in particular implies that $\mu_n$ converges vaguely in distribution to $\mu$, in the sense of Chapter 4 of Kallenberg, 2017); ii) $g(x)$ is uniformly integrable with respect to $(m_n)_{n\geq 1}$, i.e. a measurable function such that $$\sup_n\int |g(x)|I\left\{|g(x)|> M\right\}dm_n(x)\rightarrow 0$$ as $M\rightarrow +\infty$. Can we conclude that $\int g(x)\,d\mu_n(x)\rightarrow\int g(x)\,d\mu(x)$ in distribution as well?
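As a numerical aside (my own illustration, not part of the question), condition (ii) can be checked concretely for a hypothetical family: take $g(x)=x^2$ and $m_n = N(0, 1+1/n)$, whose variances are bounded, and approximate the tail integrals by quadrature:

```python
import numpy as np

def tail_integral(sigma, M):
    """Approximate the tail integral of g(x) = x^2 with respect to
    m = N(0, sigma^2), i.e. the integral of |g| over {|g| > M},
    via a Riemann sum on a wide grid."""
    x = np.linspace(-50.0, 50.0, 200001)
    dx = x[1] - x[0]
    g = x**2
    density = np.exp(-(x**2) / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    return float(np.sum(np.where(g > M, g, 0.0) * density) * dx)

# Hypothetical family: m_n = N(0, 1 + 1/n).  Since the variances are
# bounded, sup_n of the tail integral vanishes as M grows -- this is
# exactly the uniform integrability condition (ii).
sigmas = [np.sqrt(1.0 + 1.0 / n) for n in range(1, 51)]
for M in [1.0, 10.0, 100.0]:
    sup_tail = max(tail_integral(s, M) for s in sigmas)
    print(f"M = {M:6.1f}  sup_n tail = {sup_tail:.2e}")
```

The printed suprema shrink rapidly in $M$, which is the quantitative content of (ii) in this toy family.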
To give you some context, I'm trying to understand the proof of Theorem 4.16 of Ghosal and van der Vaart (2017), where the above claim is the very last step of their argument. In the book, they note that $g(x)$ can be written as the sum of $g(x)I\left\{|g(x)|\leq M\right\}$, which is bounded, and $g(x)I\left\{|g(x)|> M\right\}$. They then state that the claim follows because $$E\left[\int |g(x)|I\left\{|g(x)|> M\right\}d\mu_n(x)\right]=\int |g(x)|I\left\{|g(x)|> M\right\}dm_n(x)$$ can be made arbitrarily small by choosing $M$ large, uniformly in $n$. Without more details, though, I am not able to see how the conclusion follows from this.
References:
Kallenberg O. (2017). Random measures: theory and applications. Springer.
Ghosal S., van der Vaart A. (2017). Fundamentals of nonparametric Bayesian inference. Cambridge University Press.
I'm not familiar with random probability measures... so let me know if I misunderstood something.
By assumption (i), we have
$$X_n^{(M)} := \int g 1_{|g| \leq M} \, d\mu_n \to X^{(M)} := \int g 1_{|g| \leq M} \, d\mu \tag{1}$$
in distribution for each fixed $M>0$. Consequently, it remains to remove the truncation. Set
$$X_n := \int g \, d\mu_n \qquad X := \int g \, d\mu, $$
and pick some bounded uniformly continuous function $f$. Clearly,
$$|\mathbb{E}f(X_n)-\mathbb{E}f(X)| \leq I_1 + I_2 + I_3 \tag{2}$$
where
\begin{align*} I_1 &:= \sup_{n \in \mathbb{N}} |\mathbb{E}(f(X_n)-f(X_n^{(M)}))| \\ I_2 &:= |\mathbb{E}f(X_n^{(M)})-\mathbb{E}f(X^{(M)})| \\ I_3 &:= |\mathbb{E}f(X^{(M)})-\mathbb{E}f(X)|. \end{align*}
From $(1)$ we know that $I_2$ converges to $0$ as $n \to \infty$ (for fixed $M$), and therefore it follows from $(2)$ that
$$\limsup_{n \to \infty} |\mathbb{E}f(X_n)-\mathbb{E}f(X)| \leq I_1+I_3.$$
We will now show that the terms on the right-hand side become small if we choose $M$ sufficiently large. For $I_3$ this is quite immediate: since $X^{(M)} \to X$ pointwise as $M \to \infty$ and $f$ is bounded and continuous, it follows from the dominated convergence theorem that $I_3 \to 0$ as $M \to \infty$. It remains to consider $I_1$. Fix $\epsilon>0$. Since $f$ is uniformly continuous, there is $\delta>0$ such that
$$|x-y| \leq \delta \implies |f(x)-f(y)| \leq \epsilon.$$
This implies that
\begin{align*} |\mathbb{E}(f(X_n)-f(X_n^{(M)}))| &\leq \epsilon + 2 \|f\|_{\infty} \mathbb{P}(|X_n-X_n^{(M)}| > \delta). \end{align*} To estimate the probability, we use Markov's inequality and the identity mentioned by the authors: \begin{align*} \mathbb{P}(|X_n-X_n^{(M)}| > \delta) \leq \frac{1}{\delta} \mathbb{E}(|X_n-X_n^{(M)}|) &\leq \frac{1}{\delta}\mathbb{E}\left( \int |g| 1_{|g|>M} \, d\mu_n \right) \\ &= \frac{1}{\delta}\int |g| 1_{|g|>M} \,dm_n. \end{align*}
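As a numerical aside (my own sketch, not part of the argument): for a concrete bounded uniformly continuous $f$, say $f = \cos$ with $\|f\|_\infty = 1$ and Lipschitz constant $1$ (so $\delta = \epsilon$ works), the bound $|\mathbb{E}f(X)-\mathbb{E}f(Y)| \leq \epsilon + 2\|f\|_\infty \mathbb{P}(|X-Y|>\delta)$ can be checked directly on simulated pairs:

```python
import numpy as np

rng = np.random.default_rng(1)

# f(x) = cos(x) is bounded by 1 and 1-Lipschitz, so delta = eps works
# in the uniform-continuity step.
f, f_sup = np.cos, 1.0
eps = delta = 0.1

# X plays the role of X_n, Y the role of the truncation X_n^{(M)};
# here they are simply two coupled samples a small perturbation apart.
X = rng.standard_normal(100_000)
Y = X + 0.05 * rng.standard_normal(100_000)

# The pointwise bound |f(x)-f(y)| <= eps 1{|x-y|<=delta} + 2 1{|x-y|>delta}
# gives the displayed inequality after averaging.
lhs = abs(f(X).mean() - f(Y).mean())
rhs = eps + 2.0 * f_sup * np.mean(np.abs(X - Y) > delta)
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}")
```

The inequality holds sample by sample, since the pointwise bound is averaged on both sides.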
Taking the supremum over all $n$, we get
$$I_1 \leq \epsilon + 2 \|f\|_{\infty} \frac{1}{\delta} \sup_{n \in \mathbb{N}} \int |g| 1_{|g|>M} \,dm_n.$$
Choosing $M>0$ sufficiently large, we can achieve that $I_1 \leq 2 \epsilon$ and, by the dominated convergence argument above, also $I_3 \leq \epsilon$. As $\epsilon>0$ was arbitrary, this gives
$$\limsup_{n \to \infty} |\mathbb{E}f(X_n)-\mathbb{E}f(X)| =0,$$
and so $X_n \to X$ in distribution.
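For intuition, here is a hedged Monte Carlo sketch of the conclusion (my own toy example, with all choices hypothetical): take $\mu_n$ to be the empirical measure of $n$ i.i.d. $N(0,1)$ draws, so that $m_n = N(0,1)$, and $g(x) = x^2$, which is unbounded but uniformly integrable with respect to the $m_n$. The limit $\mu = N(0,1)$ is deterministic, so $X_n = \int g\,d\mu_n$ should converge in distribution to the constant $\int g\,d\mu = E[Z^2] = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def integral_under_random_measure(n, n_reps=1000):
    """Draw n_reps realizations of X_n = integral of g d(mu_n), where mu_n
    is the empirical measure of n iid N(0,1) draws and g(x) = x^2."""
    samples = rng.standard_normal((n_reps, n))
    # Each row is one realization of the random measure; integrating g
    # against the empirical measure is just averaging g over the sample.
    return (samples**2).mean(axis=1)

# The limit distribution is the point mass at 1, so X_n should
# concentrate around 1 as n grows.
for n in [10, 100, 10_000]:
    x = integral_under_random_measure(n)
    print(f"n = {n:6d}  mean = {x.mean():.3f}  std = {x.std():.3f}")
```

The shrinking spread around $1$ illustrates convergence in distribution to the constant limit, exactly the situation where the truncation argument above applies.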