Monte Carlo with non-uniform weighting


So, I just want to check whether what I have in mind is in fact true. Assume that we are given a distribution $p_{z}(k)$ over the whole of $\mathbb{Z}^+$. We are interested in approximating $p_v(v)$ for some observed variable $v$. However, we are not given $p_v$ directly, but rather a joint $p(v,h) = p(h)p(v|h)$ such that the marginal $p_v(v)$ is intractable. Thus we seek to approximate $p_v(v)$ via importance sampling, i.e. drawing samples from some proposal $q(h)$ and using the fact that $$p_v(v) = \mathbb{E}\left[\frac{p(v,h)}{q(h)}\right]_q.$$

What is usually done is to approximate $p_v(v)$ via the Monte Carlo estimate $$p_v(v) \approx \sum_{k=1}^K \frac{1}{K} \frac{p(v,h_k)}{q(h_k)},$$ where $h_k \sim q$. By the strong law of large numbers, the approximation becomes exact as $K \to \infty$, under some mild conditions which we assume hold.

However, I can rewrite this as $$ p_v(v) \approx \sum_{k=1}^K \frac{p_z(k)}{\sum_{i=1}^K p_z(i)} \frac{p(v,h_k)}{q(h_k)},$$ where the plain MC case corresponds to $p_z(k)$ being a (degenerate) uniform distribution; the limit still exists and converges, provided we define $\frac{p_z(k)}{\sum_{i=1}^K p_z(i)}$ appropriately.

Now, in the limit of infinitely many samples, we essentially have $$ \lim_{K \to \infty} \sum_{k=1}^K \frac{p_z(k)}{\sum_{i=1}^K p_z(i)} \frac{p(v,h_k)}{q(h_k)} = \mathbb{E}\left[\frac{p(v,h)}{q(h)}\right]_{p_z \ast q} = p_v(v).$$
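For concreteness, the two estimators can be compared numerically. The sketch below uses a hypothetical toy model (my own choice, not from the question above): $p(h) = \mathcal{N}(0,1)$, $p(v\mid h) = \mathcal{N}(h,1)$, so the marginal $p_v = \mathcal{N}(0,2)$ is available in closed form for checking, with $q = \mathcal{N}(0,2)$ as the proposal and $p_z(k) \propto r^k$ as one possible non-uniform weighting over the sample index:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
v = 0.5        # the observed value at which we estimate p_v(v)
K = 100_000    # number of Monte Carlo samples

# Samples h_k ~ q, with proposal q(h) = N(0, 2)
h = rng.normal(0.0, np.sqrt(2.0), size=K)

# Importance ratios p(v, h_k) / q(h_k), with p(v, h) = p(h) p(v|h)
ratio = normal_pdf(h, 0.0, 1.0) * normal_pdf(v, h, 1.0) / normal_pdf(h, 0.0, np.sqrt(2.0))

# Uniform weights 1/K: the standard importance-sampling estimator
est_uniform = ratio.mean()

# Non-uniform index weights p_z(k) ∝ r^k, normalised over the K draws
r = 0.999
w = r ** np.arange(K)
w /= w.sum()
est_weighted = (w * ratio).sum()

# Exact marginal: p_v = N(0, 2), since h and v|h are both Gaussian
exact = normal_pdf(v, 0.0, np.sqrt(2.0))

print(f"uniform:  {est_uniform:.4f}")
print(f"weighted: {est_weighted:.4f}")
print(f"exact:    {exact:.4f}")
```

Both estimators are unbiased here, but the geometric weights concentrate almost all mass on the first few thousand draws (the effective sample size is roughly $(1+r)/(1-r) \approx 2000$ regardless of $K$), so the weighted estimate has higher variance and does not improve as $K$ grows, whereas the uniform estimate does.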

Thus I have two questions. First, is my reasoning correct, or is there some major flaw? Second, if there is no mistake, is it possible to pick better weights for the MC sampling (I think not), and why?