Proving CLT on a random variable by working with its square


Excuse me if this question is too basic or vague. I have a continuous real random variable, say $X$, that is a non-trivial, non-explicit function of several independent random variables. My goal is to prove a CLT for $X$ as the number of these independent random variables goes to infinity. The function itself is non-explicit, but I can obtain a semi-explicit expression for the square of this random variable. My intuition tells me that if $X$ is asymptotically $\mathcal N(0,1)$, then $X^2$ is asymptotically $\mathcal N(0,1)^2$, i.e. chi-squared with one degree of freedom, although I have no clue how to show this. On the other hand, using some techniques I might be able to show that $X^2$ is asymptotically $\mathcal N(\mu,\sigma^2)$ for some $\mu,\sigma >0$, and could then proceed to show that $X$ is also Gaussian by the delta method. But this contradicts the earlier intuition that $X^2$ should be $\mathcal N(0,1)^2$.

So my question is: why is there a seeming contradiction between these two lines of reasoning? Is either of them valid?


Best answer:

If $X_n$ converges to $X$ in distribution, the continuous mapping theorem can be used to claim $g(X_n)$ converges to $g(X)$ in distribution (where $g$ is a continuous function):

https://en.wikipedia.org/wiki/Continuous_mapping_theorem
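As a quick sanity check (a minimal NumPy sketch; the uniform summands, $n$, and sample sizes are just illustrative choices), we can simulate a sequence $X_n$ that satisfies a CLT and apply $g(x)=x^2$: the squares behave like a $\chi^2_1$ variable (mean $1$, variance $2$), not like a normal one.

```python
import numpy as np

rng = np.random.default_rng(0)

# X_n = normalized sum of n iid Uniform(-1, 1) variables; by the CLT,
# X_n converges in distribution to N(0, 1).
n, samples = 200, 50_000
u = rng.uniform(-1.0, 1.0, size=(samples, n))
x_n = u.sum(axis=1) / np.sqrt(n / 3.0)  # Var(Uniform(-1, 1)) = 1/3

# Continuous mapping with g(x) = x^2: X_n^2 is close in distribution
# to chi-squared with 1 degree of freedom (mean 1, variance 2).
sq = x_n ** 2
print(np.mean(sq))  # close to 1
print(np.var(sq))   # close to 2
```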


Now if $G$ is Gaussian with mean 0 and variance 1, and if we define $X=G, Y=|G|$, we see that $X^2$ and $Y^2$ have the same distribution, but $X$ and $Y$ do not. This example shows that, in general, we cannot recover the distribution of $X$ from the distribution of $X^2$.
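The $X=G$, $Y=|G|$ counterexample is easy to see numerically (a minimal NumPy sketch; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.standard_normal(100_000)
x, y = g, np.abs(g)  # X = G, Y = |G|

# x**2 and y**2 are identical arrays, so trivially the same distribution...
assert np.allclose(x ** 2, y ** 2)

# ...but x and y have very different distributions: x is symmetric
# around 0, while y is non-negative (half-normal).
print(np.mean(x < 0))  # about 0.5
print(np.mean(y < 0))  # exactly 0.0
```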

However, we can recover the distribution of $|X|$ from that of $X^2$.

Also, if we know that $X$ has a "symmetry" property, namely that $X$ and $-X$ have the same distribution, then we can indeed recover the distribution of $X$ from that of $|X|$ (and hence from that of $X^2$). Indeed if $a,b$ are real numbers that satsify $0< a<b$ then \begin{align} P[|X| \in [a,b]] &= P[X \in [a,b]] + P[X \in [-b,-a]] \\ &= P[X \in [a,b]] + P[-X \in [a,b]] \\ &\overset{(a)}{=}P[X \in [a,b]] + P[X \in [a,b]] \\ &= 2P[X \in [a,b]] \end{align} where (a) holds by the assumption that $X$ and $-X$ have the same distribution.