I'm kind of confused about the definition of convergence in probability of a random measure $\mu_{n}$ to a deterministic limiting measure $\mu$, which are probability measures on $\mathbb R$.
Question: Which notion of distance on the space of probability measures $\mathrm{Pr}(\mathbb R)$ should we use to have the following equivalence?
$$\mu_n \to \mu \text{ in probability } $$ if and only if $$\forall \phi \in C_c(\mathbb R),\ \int_\mathbb R \phi d\mu_n \to \int_\mathbb R \phi d\mu \text{ in probability } $$ where $C_c(\mathbb R)$ is the space of all compactly supported continuous functions.
I'm actually considering empirical spectral measures, if that clarifies the problem. I suspect that we should use the Wasserstein distance, and that the $\sigma$-compactness of $\mathbb R$ comes into play via countability...
Thanks in advance.
I found this link which basically answers the problem. The only difference is that they used $C_b(\mathbb R)$. This gap can be bridged with smooth cutoff functions: since the limit $\mu$ is a probability measure, only a small amount of mass lies outside a large compact set, so any $\phi \in C_b(\mathbb R)$ can be replaced by the compactly supported function $\phi\chi$, with $\chi$ a smooth cutoff, at the cost of an arbitrarily small error.
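To spell out the cutoff argument (a sketch in my own notation: $\chi_R$ is a smooth cutoff with $\mathbf{1}_{[-R,R]}\le\chi_R\le\mathbf{1}_{[-R-1,R+1]}$): for any $\phi\in C_b(\mathbb R)$, since each $\mu_n$ is a probability measure,
$$\left|\int_{\mathbb R} \phi\,d\mu_n-\int_{\mathbb R} \phi\chi_R\,d\mu_n\right|\le \|\phi\|_\infty\int_{\mathbb R}(1-\chi_R)\,d\mu_n = \|\phi\|_\infty\left(1-\int_{\mathbb R} \chi_R\,d\mu_n\right),$$
and because $\phi\chi_R$ and $\chi_R$ both lie in $C_c(\mathbb R)$, the assumed convergence makes the right-hand side converge in probability to $\|\phi\|_\infty\left(1-\int_{\mathbb R}\chi_R\,d\mu\right)$, which can be made arbitrarily small by taking $R$ large, precisely because $\mu$ is a probability measure.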
There are many equivalent notions. The one you state is true, but you do have to consider $C_{b}(\mathbb{R})$, because that is the definition of weak convergence; convergence of integrals for compactly supported functions only guarantees vague convergence, not tightness. Another equivalent formulation is $P(|\phi_{\mu_{n}}(t)-\phi_{\mu}(t)|>\epsilon)\to 0$ for each $t\in \mathbb{R}$, where $\phi_{\nu}$ denotes the characteristic function of $\nu$. Similarly one can do this with Stieltjes transforms, or with moment generating functions (i.e. if $\mu$ is determined by its moments).
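As a quick numerical sanity check of the characteristic-function criterion (my own sketch, not part of the answer: here $\mu_n$ is the empirical measure of $n$ i.i.d. $N(0,1)$ samples, so $\mu = N(0,1)$ and $\phi_\mu(t)=e^{-t^2/2}$):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_cf(samples, t):
    # phi_{mu_n}(t) = (1/n) * sum_k exp(i t X_k)
    return np.mean(np.exp(1j * t * samples))

t = 1.7
target = np.exp(-t**2 / 2)  # characteristic function of N(0,1) at t
eps = 0.05

for n in [100, 10_000]:
    # Monte Carlo estimate of P(|phi_{mu_n}(t) - phi_mu(t)| > eps)
    trials = 200
    exceed = sum(
        abs(empirical_cf(rng.standard_normal(n), t) - target) > eps
        for _ in range(trials)
    )
    print(n, exceed / trials)
```

The estimated probability of a deviation larger than $\epsilon$ shrinks as $n$ grows, which is exactly the convergence-in-probability statement at the fixed point $t$.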
The point being that convergence for $C_{c}(\mathbb{R})$ test functions is only equivalent if you know a priori that the limit $\mu$ is itself a probability measure. Otherwise it need not be true. That is the tricky part which you should keep in mind.
What I mean is the following:
Let $\mu_{n}=\frac{1}{n}\sum_{k=1}^{n}\delta_{k}$. Then for any $f\in C_{c}(\mathbb{R})$ you have $\int f\,d\mu_{n}=\dfrac{1}{n}\sum_{k=1}^{n}f(k)\to 0$: since $f$ is compactly supported, $f(k)\neq 0$ for only finitely many $k$, so the sum stays bounded while the denominator grows.
This means that $\mu_{n}\to 0$ vaguely, i.e. the limit is the zero measure, which is not a probability measure: all the mass escapes to infinity.
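One can watch this escape of mass numerically (a sketch; the tent function $f$ below is just one choice of element of $C_c(\mathbb R)$):

```python
import numpy as np

def f(x):
    # a compactly supported "tent" function: f(x) = max(0, 1 - |x - 5|),
    # supported in [4, 6]
    return np.maximum(0.0, 1.0 - np.abs(x - 5.0))

def integral_mu_n(n):
    # integral of f against mu_n = (1/n) * sum_{k=1}^{n} delta_k
    k = np.arange(1, n + 1)
    return f(k).mean()

for n in [10, 100, 1000, 10000]:
    print(n, integral_mu_n(n))
```

Each $\mu_n$ has total mass $1$, yet for $n\ge 5$ the integral equals $1/n$ (only the atom at $k=5$ contributes), so it vanishes in the limit even though no $\mu_n$ loses any mass.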
But if you know a priori that $\mu$ is a probability measure, then convergence for $C_{c}$ functions is equivalent to convergence for $C_{b}$ functions, precisely via the cutoff-function argument you mention: convergence against $C_{c}$ functions, together with the limit having full mass, forces the sequence $\mu_{n}$ to be tight.