Proof that the Monte Carlo estimate $\bar{x}_n = \bar{x}_{n-1} + \alpha(x_n - \bar{x}_{n-1})$ converges to $\mathbf{E}[x]$


Here the $x_i$ are i.i.d. samples of a random variable $x$, and $0 < \alpha < 1$ is a fixed step size. I've tried using Chebyshev's inequality (as in the proof of the weak law of large numbers), but it doesn't quite seem to work.

Assuming the recursion starts from $\bar{x}_0 = 0$, we can show by induction that $$\bar{x}_n = \sum_{i = 1}^{n}\alpha(1-\alpha)^{n-i}x_i$$
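As a quick sanity check (a minimal Python sketch, assuming the recursion is started from $\bar{x}_0 = 0$; the values of $\alpha$ and $n$ are arbitrary), the closed form can be compared against the recursion directly:

```python
import random

random.seed(0)
alpha = 0.3
n = 20
xs = [random.gauss(0.0, 1.0) for _ in range(n + 1)]  # xs[1..n] are the samples

# Recursive form: xbar_n = xbar_{n-1} + alpha * (x_n - xbar_{n-1}), with xbar_0 = 0
xbar = 0.0
for i in range(1, n + 1):
    xbar += alpha * (xs[i] - xbar)

# Closed form: sum over i = 1..n of alpha * (1 - alpha)^(n - i) * x_i
closed = sum(alpha * (1 - alpha) ** (n - i) * xs[i] for i in range(1, n + 1))

print(abs(xbar - closed))  # agreement up to floating-point error
```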

We have $$\mathbf{E}[x] = \mu$$ and $$\mathbf{var}[x] = \sigma^2$$

Because the $x_i$ are i.i.d., we have $$\mathbf{E}[\bar{x}_n] = \alpha \mu \sum_{i = 1}^{n}(1-\alpha)^{n-i} = \mu(1-\beta^n) $$ where $\beta = 1 - \alpha$, with $0 < \beta < 1$.

So $\bar{x}_n$ is biased, but asymptotically unbiased: $\mathbf{E}[\bar{x}_n] \to \mu$ as $n \to \infty$.

By similar reasoning (summing the geometric series $\alpha^2\sigma^2\sum_{i=1}^{n}\beta^{2(n-i)}$) we have $$\mathbf{var}[\bar{x}_n] = \frac{\sigma^2(1-\beta)(1-\beta^{2n})}{1+\beta}$$
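Both moments can be checked by simulation (a Monte Carlo sketch in Python; the particular values of $\mu$, $\sigma$, $\alpha$, $n$, and the trial count are arbitrary choices, and $\bar{x}_0 = 0$ is assumed as above):

```python
import random
import statistics

random.seed(1)
alpha, mu, sigma = 0.2, 3.0, 2.0
beta = 1 - alpha
n, trials = 50, 20000

# Run the recursion for n steps, many times, and collect the final iterate.
finals = []
for _ in range(trials):
    xbar = 0.0
    for _ in range(n):
        x = random.gauss(mu, sigma)
        xbar += alpha * (x - xbar)
    finals.append(xbar)

# Predicted moments from the geometric-series computation above.
mean_pred = mu * (1 - beta ** n)
var_pred = sigma ** 2 * (1 - beta) * (1 - beta ** (2 * n)) / (1 + beta)

print(statistics.mean(finals), mean_pred)
print(statistics.pvariance(finals), var_pred)
```

Note that the predicted variance tends to $\sigma^2(1-\beta)/(1+\beta) > 0$ rather than to $0$, which already hints at the problem with the claimed convergence.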

From Chebyshev's inequality we then have $$P(|\bar{x}_n - \mu(1-\beta^{n})| \geq \epsilon) \leq \frac{\sigma^2(1-\beta)(1-\beta^{2n})}{(1+\beta)\epsilon^2}$$

In the limit this becomes $$P(|\bar{x}_n - \mu| \geq \epsilon) \leq \frac{\sigma^2(1-\beta)}{(1+\beta)\epsilon^2}$$ provided we can take the limit across the probability (I'm not sure that step is justified). But this is not quite what we wanted, since the bound does not vanish as $n \to \infty$. My analysis toolbelt is pretty limited, so I'm not sure how to proceed. I don't know any measure theory, so I'd appreciate a proof using classical probability. Thanks!

There is 1 answer below.

BEST ANSWER

You can't prove it, because it is not true.

Writing $x_n=\left(\bar x_n-(1-\alpha)\bar x_{n-1}\right)/\alpha$ shows that if $(\bar x_n)$ converges to $\mu$, then $(x_n)$ must also converge to $\mu$. But an i.i.d. sequence $(x_n)$ does not converge at all unless $x$ is almost surely constant, so $(\bar x_n)$ cannot converge to $\mu$.

I am using convergence in probability, but the same argument applies to other types of convergence as well.
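A small simulation (an illustrative Python sketch, not part of the answer's argument; the values of $\alpha$ and the horizon are arbitrary) makes the non-convergence visible: with a fixed step size $\alpha$, the iterate keeps fluctuating around $\mu$ with steady-state variance roughly $\sigma^2(1-\beta)/(1+\beta) = \sigma^2\alpha/(2-\alpha)$, rather than settling down:

```python
import random

random.seed(2)
alpha, mu, sigma = 0.1, 0.0, 1.0

xbar = 0.0
late = []  # iterates collected long after any transient has died out
for t in range(1, 200_001):
    x = random.gauss(mu, sigma)
    xbar += alpha * (x - xbar)
    if t > 100_000:
        late.append(xbar)

# If xbar_n converged to mu, this spread would shrink toward 0; it stays
# on the order of the steady-state standard deviation instead.
spread = max(late) - min(late)
print(spread)
```

Shrinking $\alpha$ shrinks the fluctuations but never eliminates them; only a decaying step size (e.g. $\alpha_n = 1/n$, which recovers the ordinary sample mean) gives convergence to $\mu$.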