I'd like to estimate the expectation $\mathbb{E}[f(X, Y)]$ of a two-variable function using "nested" Monte Carlo integration, where $X$ and $Y$ are independent and may follow different distributions (possibly the same one).
The "nested" Monte Carlo integration algorithm I have in mind is as follows:

sum = 0.0
for i in 1:N
    x ~ p(x)
    for j in 1:k*N
        y ~ p(y)
        sum += f(x, y)
return sum / (k * N^2)
Note that $k \in \mathbb{N}$. The reason $y$ is sampled more often than $x$ is that, in my setting, sampling $y$ is much easier than sampling $x$.
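For concreteness, here is a minimal runnable sketch of the pseudocode above in Python/NumPy. The function names (`nested_mc`, `sample_x`, `sample_y`) and the test distributions are my own choices, not part of the original setup; the inner loop is vectorized but performs exactly the $k N$ draws per outer sample described above.

```python
import numpy as np

def nested_mc(f, sample_x, sample_y, N, k, rng):
    """Nested Monte Carlo estimate of E[f(X, Y)].

    For each of the N outer draws of x, draw k*N fresh samples of y,
    accumulate f(x, y) over all pairs, and divide by the total count k*N^2.
    """
    total = 0.0
    for _ in range(N):
        x = sample_x(rng)              # one (expensive) draw of x
        ys = sample_y(rng, k * N)      # k*N (cheap) draws of y, vectorized
        total += np.sum(f(x, ys))
    return total / (k * N**2)

# Example: X ~ N(0, 1), Y ~ Uniform(0, 1), f(x, y) = x^2 * y,
# so E[f(X, Y)] = E[X^2] * E[Y] = 1 * 0.5 = 0.5.
rng = np.random.default_rng(0)
estimate = nested_mc(lambda x, y: x**2 * y,
                     lambda r: r.normal(),
                     lambda r, n: r.uniform(size=n),
                     N=500, k=2, rng=rng)
```

With $N = 500$ and $k = 2$ the estimate lands close to the true value $0.5$, which at least matches the intuition that the estimator is consistent.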
Does the estimated value converge to the true expected value in some sense (e.g., in probability, almost surely), as in the ordinary Monte Carlo method?
Intuitively, if we view the algorithm as Monte Carlo integration by sampling from the Cartesian product of the two probability spaces, then the convergence result should be the same as in the ordinary case. However, the difference here is that the numbers of samples for $x$ and $y$ are not equal, so the procedure cannot be regarded as i.i.d. sampling from the product probability space.
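To make the structure of the estimator explicit (writing $y_{ij}$ for the $j$-th inner sample drawn during the $i$-th outer iteration, and $\hat{\mu}_{N,k}$ for the returned value), the algorithm computes

$$
\hat{\mu}_{N,k} \;=\; \frac{1}{N} \sum_{i=1}^{N} \left[ \frac{1}{kN} \sum_{j=1}^{kN} f(x_i, y_{ij}) \right],
$$

i.e., an outer ordinary Monte Carlo average over $x_1, \dots, x_N$ of inner Monte Carlo estimates of $g(x) = \mathbb{E}_Y[f(x, Y)]$. So the question is whether this nested average converges to $\mathbb{E}_X[g(X)] = \mathbb{E}[f(X, Y)]$ even though the inner and outer sample sizes differ.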