Drawing a sample from a measure


I am reading *Perfect Simulation* by Huber. I'm having some trouble interpreting the following passage (Section 2.2):

Let $\mu$ be a measure over $\Omega$, and suppose for density $g$ it is possible to draw from the product density $\mu \times \text{Unif}(\Omega_g)$, where $\Omega_g = \{ (x,y) : x \in \Omega,\ 0 \leq y \leq g(x) \}$.

Once this draw is made, and the $Y$ component is thrown away, then the remaining $X$ component has density $g$.

I have the following questions with regard to the above snippet:

  1. How does the $X$ component have density $g$? I can understand the case when $X$ is uniformly distributed, so that $X \sim g$, but what if $X$ is normally distributed? Wouldn't $X$ then be distributed normally, with $Y \sim g$?

  2. Does "$X$ is distributed uniformly" actually mean "$X$ is distributed uniformly according to $\mu$"?

  3. The expression $\mu \times \text{Unif}(\Omega_g)$ has been called both a "product density" and a "product measure". I'm assuming that's just a bit of notational abuse, since it is the product of a measure and a density function. Is that correct?

I would appreciate any clarity on this!

Best answer:

Here is the translation of the result into what is hopefully a more familiar language:

Theorem: Let $(\Omega,\Sigma,\mu)$ be a probability space and $\nu$ a probability measure on $(\Omega,\Sigma)$ that admits a density $g:\Omega\to[0,\infty)$ with respect to $\mu$. Let $\lambda$ be the Lebesgue measure restricted to the Borel sets of $\mathbb{R}$ and $\mu\otimes\lambda$ be the product measure obtained from $\mu$ and $\lambda$. Now, for every $A\in\Sigma,$

$$\nu(A)=\mu\otimes\lambda\Big(\big\{(\omega,y)\in\Omega\times\mathbb{R}\mid \omega \in A \text{ and } 0\le y\le g(\omega)\big\}\Big).$$

Proof: Using Fubini's theorem, $$\mu\otimes\lambda\Big(\big\{(\omega,y)\in\Omega\times\mathbb{R}\mid \omega \in A \text{ and } 0\le y\le g(\omega)\big\}\Big)=\int_A\int_{[0,g(\omega)]}1~\mathrm d\lambda~\mathrm d\mu=\int_A \lambda\big([0,g(\omega)]\big)~\mathrm d\mu=\int_A g(\omega)~\mathrm d\mu=\nu(A).$$ $\blacksquare$
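To see the theorem in action as a sampling recipe (draw $(X,Y)$ uniformly under the graph of $g$, discard $Y$, and $X$ has density $g$), here is a minimal sketch. The concrete choices are my own, not from the book: $\mu$ is taken to be the uniform (Lebesgue) measure on $[0,1]$, $g(x)=2x$ is a density with respect to it, and $M=2$ bounds $g$.

```python
import random

random.seed(0)

def draw_from_g(g, M, n):
    """Draw n samples with density g (w.r.t. the uniform measure on [0,1])
    by sampling (X, Y) uniformly under the graph of g and keeping X."""
    samples = []
    while len(samples) < n:
        x = random.random()          # X ~ mu = Unif(0, 1)
        y = random.uniform(0, M)     # Y ~ Unif(0, M), where M bounds g
        if y <= g(x):                # (X, Y) landed under the graph of g
            samples.append(x)        # discard Y; X then has density g
    return samples

# Toy example: g(x) = 2x is a density on [0,1]; under g, E[X] = 2/3.
xs = draw_from_g(lambda x: 2 * x, M=2.0, n=200_000)
print(sum(xs) / len(xs))  # close to 2/3
```

The sample mean landing near $2/3$ rather than $1/2$ is exactly the point of question 1: even though each $X$ proposal is drawn uniformly (from $\mu$), the accepted $X$'s have density $g$.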

Here is how this result is used: you might have a simple procedure to compute $g(\omega)$ from $\omega$, to simulate draws from $\mu$, and to decide whether a draw lies in $A$, yet be unable to evaluate the integral $\int_A g~\mathrm d\mu$ directly. Suppose $g$ is bounded, with values in $[0,M]$. Draw a point $\omega$ according to $\mu$ and a point $y$ uniformly from $[0,M]$. Compute $g(\omega)$; if $\omega\in A$ and $y\le g(\omega)$, mark the trial as a success. Repeat this independently many times. Then, with probability one, the relative frequency of successes multiplied by $M$ converges to $\nu(A)$.
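The acceptance procedure above can be sketched in a few lines. The setup is again a toy assumption of mine, not from the text: $\mu = \text{Unif}(0,1)$, $g(x)=2x$ with bound $M=2$, and $A=[0,\tfrac12]$, so that $\nu(A)=\int_0^{1/2}2x\,\mathrm dx=0.25$.

```python
import random

random.seed(1)

# Toy setup (assumed for illustration): mu = Unif(0,1),
# g(x) = 2x with bound M = 2, and A = [0, 0.5], so nu(A) = 0.25.
g = lambda x: 2 * x
M = 2.0
in_A = lambda x: x <= 0.5

n = 500_000
successes = 0
for _ in range(n):
    omega = random.random()        # draw omega ~ mu
    y = random.uniform(0, M)       # draw y ~ Unif(0, M)
    if in_A(omega) and y <= g(omega):
        successes += 1             # mark the trial as a success

estimate = M * successes / n       # relative frequency times M
print(estimate)                    # close to nu(A) = 0.25
```

Note the factor of $M$: each trial succeeds with probability $\frac1M\int_A g~\mathrm d\mu$, since $y$ is uniform on $[0,M]$ rather than $[0,1]$, so the raw success frequency must be rescaled by $M$ to recover $\nu(A)$.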