Change of measure applied only to part of a joint distribution


I found the following in a paper, but I don't know how to explicitly write it down.

Let $U_1,...,U_n,U^*_1,...,U^*_n$ be mutually independent random vectors, each with probability measure $P = P_{\theta_0}$, and denote by $\mathcal{P}_n$ their joint probability measure.

Furthermore, let $p_{\theta_n}$ and $p$ be the density functions of $P_{\theta_n}$ and $P$, respectively.

Now we define a change of measure by $\frac{d\mathcal{P}^*_n}{d\mathcal{P}_n} = \prod_{i=1}^{n} \frac{p_{\theta_n}(U^*_i)}{p(U^*_i)}$.
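Writing out the joint density makes the effect of this change of measure explicit (a sketch in my own notation, assuming all measures have densities with respect to a common dominating product measure $\mu$, which the paper does not name). By independence under $\mathcal{P}_n$,

$$\frac{d\mathcal{P}_n}{d\mu}(u_1,\dots,u_n,u^*_1,\dots,u^*_n)=\prod_{i=1}^n p(u_i)\prod_{i=1}^n p(u^*_i),$$

so multiplying by the likelihood ratio gives

$$\frac{d\mathcal{P}^*_n}{d\mu}(u_1,\dots,u_n,u^*_1,\dots,u^*_n)=\prod_{i=1}^n p(u_i)\prod_{i=1}^n p_{\theta_n}(u^*_i),$$

i.e. the factors belonging to $U_1,\dots,U_n$ are untouched, while each $p(u^*_i)$ is replaced by $p_{\theta_n}(u^*_i)$.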

As I understand it, the change of measure only affects the last $n$ coordinates of the joint distribution, i.e. $U^*_1,...,U^*_n$.

Now the paper states that under $\mathcal{P}^*_n$, the vectors $U_1,...,U_n$ are mutually independent with probability measure $P$, while, conditionally on the sigma-algebra generated by $U_1,...,U_n$, the random vectors $U^*_1,...,U^*_n$ are mutually independent with probability measure $P_{\theta_n}$.

How do I prove this formally? I can describe it in words: because the change of measure only involves the last $n$ vectors, their probability measure becomes $P_{\theta_n}$, while the first $n$ vectors keep the original probability measure $P$. But how do I write that down rigorously?
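One standard way (a sketch, not taken from the paper) is to compute the $\mathcal{P}^*_n$-expectation of a product of bounded measurable test functions $f$ and $g$ and show that it factorizes. By the definition of $\mathcal{P}^*_n$,

$$\mathbb{E}_{\mathcal{P}^*_n}\Big[f(U_1,\dots,U_n)\,g(U^*_1,\dots,U^*_n)\Big]
= \mathbb{E}_{\mathcal{P}_n}\Big[f(U_1,\dots,U_n)\,g(U^*_1,\dots,U^*_n)\prod_{i=1}^n \frac{p_{\theta_n}(U^*_i)}{p(U^*_i)}\Big],$$

and since $(U_1,\dots,U_n)$ and $(U^*_1,\dots,U^*_n)$ are independent under $\mathcal{P}_n$, this equals

$$\mathbb{E}_{\mathcal{P}_n}\big[f(U_1,\dots,U_n)\big]\cdot \mathbb{E}_{\mathcal{P}_n}\Big[g(U^*_1,\dots,U^*_n)\prod_{i=1}^n \frac{p_{\theta_n}(U^*_i)}{p(U^*_i)}\Big]
= \mathbb{E}_{P^{\otimes n}}[f]\cdot \mathbb{E}_{P_{\theta_n}^{\otimes n}}[g],$$

where the second factor follows by writing the expectation as an integral against $\prod_i p(u^*_i)$ and cancelling. Taking $g\equiv 1$ shows that $(U_1,\dots,U_n)$ still has law $P^{\otimes n}$ under $\mathcal{P}^*_n$; taking $f$ and $g$ themselves as products $\prod_i f_i(U_i)\prod_i g_i(U^*_i)$ gives the mutual independence claims; and the factorization of the joint expectation identifies the conditional law of $(U^*_1,\dots,U^*_n)$ given $\sigma(U_1,\dots,U_n)$ as $P_{\theta_n}^{\otimes n}$.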

Until reading this paper, I didn't even know that a change of measure could act on only part of a joint distribution.

I'd be grateful for any help and open to remarks.

Thanks a lot!