I understand that a random variable $X$ and a probability measure $P$ on a measurable space $(\Omega,\mathcal{A})$ induce the distribution $P_X$ on the target space $(\Omega',\mathcal{A}')$.
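To fix notation: by the distribution $P_X$ I mean the pushforward (image) measure,

$$P_X(A') = P\bigl(X^{-1}(A')\bigr) = P\bigl(\{\omega \in \Omega : X(\omega) \in A'\}\bigr), \qquad A' \in \mathcal{A}'.$$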
But is there an example where it is important to differentiate between the distribution $P_X$ (the pushforward measure) and the probability measure $P$?
Is there a theorem that deals with a whole sequence of distributions $P_{X_n}$ while working with only a single probability measure $P$?
Or is the distinction between the two measures purely formal?
Many characteristics of a random variable (the mean, variance, characteristic function, etc.) depend only on the distribution of that random variable. In some sense, writing down a triple like $(\Omega, \mathcal A, \mathbb P)$ is quite artificial.
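For instance, by the change-of-variables formula for pushforward measures, the expectation of $g(X)$, where $g$ is any measurable function on $\Omega'$ (introduced here just for illustration), can be computed on the image space alone, so the underlying triple never actually enters:

$$\mathbb{E}[g(X)] = \int_\Omega g(X(\omega))\,\mathrm{d}\mathbb{P}(\omega) = \int_{\Omega'} g(x)\,\mathrm{d}P_X(x).$$

Taking $g(x) = x$, $g(x) = (x - \mathbb{E}X)^2$, or $g(x) = e^{itx}$ recovers the mean, the variance, and the characteristic function, respectively.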
I was once working on a problem with a probabilist. When I mentioned $\omega \in \Omega$, he remarked that this meant I was doing not probability but measure theory.