Convergence in probability and convergence in distribution in the context of OLS


What am I missing here? Let there be three iid random vectors $(W_i, D_i, U_i)$, where $U_i$ may be interpreted as the error term in a traditional OLS context, so that $E[W_iU_i]=0$ and $E[D_iU_i]=0$. Moreover, everything is well defined: second moments and covariances across all three variables exist, and for simplicity the expectations of all three variables equal $0$.

Consider now the following problem: we are interested in the limiting behavior (as $n$ grows to $\infty$) of the expression $\frac{1}{n}\sum{W_i^2}\cdot\frac{1}{\sqrt{n}}\sum{D_iU_i}$, which should be completely identical to $\frac{1}{\sqrt{n}}\sum{W_i^2}\cdot\frac{1}{n}\sum{D_iU_i}$. This kind of setting is very common in OLS consistency proofs. Such problems are usually tackled by invoking some form of the CLT on the error component (which will converge in distribution to $N(0, E[D_i^2U_i^2])$) and convergence in probability for the first term. We then apply Slutsky's theorem and are essentially done.

However, even though I strongly believe I am wrong, I do not see why the opposite route does not lead to the same result. That is, apply the CLT to the $W_i^2$ component (assuming the existence of fourth moments) and then use the convergence in probability of $\frac{1}{n}\sum{D_iU_i}$ to $E[D_iU_i] = 0$. But, and this is the key problem, we then obtain a degenerate result of the form $N(0,0)$ rather than $N(0,\sigma_{w}^2E[D_i^2U_i^2])$.

What am I missing here? Thank you for your help!

Best answer

You misapplied the CLT. Recall what the classical CLT says:

$$X_i \text{ iid with } E[X_i]=\mu,\ V(X_i)=\sigma^2<\infty \implies \sqrt n\left(\frac{1}{n}\sum_i X_i-\mu\right)\to_d N(0,\sigma^2).$$
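This statement is easy to check numerically. In the sketch below the choice $X_i \sim \mathrm{Uniform}(-1,1)$, with $\mu=0$ and $\sigma^2=1/3$, is purely an illustrative assumption:

```python
# Sanity check of the classical CLT: sqrt(n) * (sample mean - mu)
# should have variance close to sigma^2 for large n.
# Assumption (illustrative): X_i ~ Uniform(-1, 1), so mu = 0, sigma^2 = 1/3.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1000, 10000

X = rng.uniform(-1.0, 1.0, size=(reps, n))
Z = np.sqrt(n) * (X.mean(axis=1) - 0.0)  # one CLT statistic per replication

print(Z.mean(), Z.var())  # should be near 0 and sigma^2 = 1/3, respectively
```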

Now consider

$$\left(\frac{1}{n}\sum_i{W_i^2}\right)\left(\frac{1}{\sqrt{n}}\sum_i{D_iU_i}\right).\quad (1)$$

The WLLN tells us the first term converges in probability to $E[W_1^2]$; further, because $D_iU_i$ has mean zero, the CLT tells us the second term converges in distribution to $N(0,E[D_1^2U_1^2])$. Slutsky's theorem then tells us the whole expression converges in distribution to $N(0,(E[W_1^2])^2E[D_1^2U_1^2]).$
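As a sanity check, here is a small Monte Carlo sketch of this Slutsky limit. The distributional choices are assumptions made only for illustration: taking $W_i$, $D_i$, $U_i$ to be independent standard normals gives $E[W_1^2]=1$ and $E[D_1^2U_1^2]=1$, so the limit should be $N(0,1)$:

```python
# Monte Carlo sketch of the Slutsky limit of expression (1).
# Assumption (illustrative): W, D, U independent standard normals, so
# E[W^2] = 1, E[D^2 U^2] = 1, and the limit is N(0, 1).
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 5000

W = rng.standard_normal((reps, n))
D = rng.standard_normal((reps, n))
U = rng.standard_normal((reps, n))

# (1/n) sum W_i^2  times  (1/sqrt(n)) sum D_i U_i, one value per replication
s = (W**2).mean(axis=1) * (D * U).sum(axis=1) / np.sqrt(n)

print(s.mean(), s.var())  # should be near 0 and 1, respectively
```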

If we instead write the expression in $(1)$ as $$\left(\frac{1}{\sqrt n}\sum_i{W_i^2}\right)\left(\frac{1}{n}\sum_i{D_iU_i}\right),\quad (2)$$

we cannot apply the CLT to the first term, since $W_i^2$ does not have mean zero. If we demean the first term by writing $(2)$ as

$$\left[\sqrt n \left(\frac{1}{ n}\sum_i{W_i^2}-E[W_1^2]\right)+\sqrt nE[W_1^2]\right]\left(\frac{1}{n}\sum_i{D_iU_i}\right),$$

we see that the first term diverges to infinity while the second converges in probability to zero, so a naive application of Slutsky's theorem runs into the indeterminate form $\infty \cdot 0$. Writing the expression as in $(2)$ therefore doesn't help us find the asymptotic distribution.
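As a side remark, not needed for the answer above but showing why the two groupings must agree: expanding the demeaned product term by term and tracking stochastic orders recovers the same limit as in $(1)$,

$$\underbrace{\sqrt n\left(\frac{1}{n}\sum_i{W_i^2}-E[W_1^2]\right)\cdot\frac{1}{n}\sum_i{D_iU_i}}_{O_p(1)\,\cdot\,o_p(1)\;=\;o_p(1)} \;+\; E[W_1^2]\cdot\frac{1}{\sqrt n}\sum_i{D_iU_i} \;\to_d\; N\left(0,(E[W_1^2])^2E[D_1^2U_1^2]\right),$$

since the first product vanishes in probability while the second term is exactly $E[W_1^2]$ times the CLT term from $(1)$.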