Validity of Monte Carlo


My question concerns the fundamental validity of the concept of Monte Carlo. In the text where I first learned about Monte Carlo, and in every resource I have found online since, the authors seem to gloss over the same point.

Typically, one wants to get an idea of the value of an expectation $\mathbb{E}X$ for some random variable $X$. Defining the 'sample average' $$X^N=\frac{1}{N}\sum_{i=1}^N X_i$$ of i.i.d. copies $X_1, X_2, \ldots$ of $X$, the SLLN tells us that $$\mathbb{P}\{\omega \in \Omega : X^N(\omega) \rightarrow \mathbb{E}X \text{ as } N\rightarrow \infty\}=1.$$ This is then taken as a 'proof' of why it makes sense to simulate the same random variable repeatedly and average the results, which gives a good approximation of $\mathbb{E}X$. On the other hand, what we are really programming is $$\frac{1}{N}\sum_{i=1}^N X(\omega^i).$$ So in the statement above we are summing over multiple random variables, whereas in practice we are summing over stochastic samples $\omega^i$ of the same random variable. Can somebody please help me put these two pictures together?
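To make the "what we are really programming" part concrete, here is a minimal sketch (the choice of $X \sim \mathrm{Exp}(1)$ and the NumPy setup are my own, purely illustrative assumptions). Each call to the generator produces one realization $X(\omega^i)$; averaging $N$ of them is exactly the quantity in the last display:

```python
import numpy as np

# Illustrative assumption: X ~ Exponential(1), so the true mean is E[X] = 1.
rng = np.random.default_rng(seed=0)

N = 100_000
# Each entry is one realization X(w^i) -- a fresh, independent draw.
samples = rng.exponential(scale=1.0, size=N)

# The sample average (1/N) * sum_i X(w^i); by the SLLN this should be
# close to E[X] = 1 for large N.
estimate = samples.mean()
print(estimate)
```

The point of contact with the SLLN: because each draw is generated independently from the same distribution, the array `samples` can equally be read as one realization of the i.i.d. sequence $X_1(\omega), \ldots, X_N(\omega)$ at a single fixed $\omega$ of a larger product space.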