How to standardize a random variable up to the fourth moment?


Suppose $X$ is a random variable such that $\operatorname*{E}(X^n) < \infty$ for every positive integer $n$. We can standardize $X$ up to the second moment by letting $Y = \frac{X - \operatorname*{E}(X)}{\sqrt{\operatorname*{E}\left(X^2\right) - \operatorname*{E}(X)^2}}$, so that $\operatorname*{E}(Y) = 0$ and $\operatorname*{E}(Y^2) = 1$. How do we construct a random variable $Z$ from $X$ that satisfies the following properties? The right-hand sides can be any constants; I picked 0's and 1's simply because they look nice.

\begin{equation} \begin{cases} \operatorname*{E}(Z) &= 0 \\ \operatorname*{E}(Z^2) &= 1 \\ \operatorname*{E}(Z^3) &= 0 \\ \operatorname*{E}(Z^4) &= 1 \\ \end{cases} \end{equation}

I would imagine we need some higher-order operations like square root or maybe logarithm, but I cannot wrap my head around it.
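As a sanity check, the second-moment standardization described above can be sketched numerically from samples (the Exponential(1) distribution here is just an arbitrary example, not part of the question):

```python
import random
import statistics

# Standardize samples of X up to the second moment:
# Y = (X - E[X]) / sqrt(E[X^2] - E[X]^2).
random.seed(0)
x = [random.expovariate(1.0) for _ in range(100_000)]

mean = statistics.fmean(x)
second = statistics.fmean(v * v for v in x)
std = (second - mean * mean) ** 0.5

y = [(v - mean) / std for v in x]

print(statistics.fmean(y))                 # sample mean of Y: 0 up to float error
print(statistics.fmean(v * v for v in y))  # sample second moment of Y: 1 up to float error
```

By construction the sample mean of `y` is exactly 0 and its sample second moment exactly 1, up to floating-point rounding; the question asks whether a comparable exact construction exists through the fourth moment.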


Answer 1:

No, this is not always possible. First, by Jensen's inequality, $\mathbb{E}[Z^4] = \mathbb{E}[(Z^2)^2] \ge \mathbb{E}[Z^2]^2$, with equality holding iff $Z^2$ is almost surely constant. So $\mathbb{E}[Z^2] = \mathbb{E}[Z^4] = 1$ forces $Z^2 = 1$ almost surely, i.e. $Z$ can only take the values $\pm 1$. That immediately rules out $\mathbb{E}[Z^4] = \mathbb{E}[Z^2]$ unless $Z^2$ is a constant.
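Note that the boundary case of Jensen's inequality is attainable on its own: a variable taking the values $\pm 1$ with equal probability has exactly the target moments $(0, 1, 0, 1)$. A quick check:

```python
# A variable Z in {-1, +1} with equal probability attains the target
# moments exactly, since Z^2 = 1 with probability one.
values, probs = (-1.0, 1.0), (0.5, 0.5)

def moment(n):
    """Return E[Z^n] for the two-point distribution above."""
    return sum(p * v**n for v, p in zip(values, probs))

print([moment(n) for n in range(1, 5)])  # [0.0, 1.0, 0.0, 1.0]
```

The obstruction is therefore not the moment sequence itself but producing such a $Z$ as a function of an arbitrary given $X$.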

More generally, even if we don't require $\mathbb{E}[Z^4] = 1$ but still want to prescribe values for $\mathbb{E}[Z^3]$ and $\mathbb{E}[Z^4]$, this can't be done in general. For example, consider the case where $X=x_1$ with probability $p$ and $X=x_2$ with probability $q=1-p$. If $Z = f(X)$, then $Z$ can only take the values $a = f(x_1)$ and $b = f(x_2)$. We have \begin{align*} \mathbb{E}[Z] &= pa + qb \\ \mathbb{E}[Z^2] &= pa^2 + qb^2 \\ \mathbb{E}[Z^3] &= pa^3 + qb^3 \\ \mathbb{E}[Z^4] &= pa^4 + qb^4. \end{align*} Requiring $\mathbb{E}[Z] = 0$ and $\mathbb{E}[Z^2] = 1$ immediately forces $a=\frac{-qb}{p}$ and $b^2 = \frac{p}{q^2+qp} = \frac{p}{q}$. This determines $a$ and $b$ up to a common sign, so we have no freedom left to make $\mathbb{E}[Z^3]$ or $\mathbb{E}[Z^4]$ take any particular value.
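This obstruction can be verified numerically. Once the first two moment constraints pin down $a = -qb/p$ and $b^2 = p/q$, the third and fourth moments are forced (up to a sign on the third). A small check with $p = 1/3$, chosen arbitrarily:

```python
from fractions import Fraction

# Two-point distribution: Z = a with probability p, Z = b with probability q.
# E[Z] = 0 and E[Z^2] = 1 force a = -q*b/p and b^2 = p/q, so the higher
# moments are determined, not free.
p = Fraction(1, 3)
q = 1 - p

b2 = p / q  # b^2 = p/q; b is determined up to sign
for sign in (1, -1):
    b = sign * float(b2) ** 0.5
    a = float(-q * b / p)
    m3 = float(p) * a**3 + float(q) * b**3
    m4 = float(p) * a**4 + float(q) * b**4
    print(round(m3, 6), round(m4, 6))  # third moment flips sign; fourth is fixed
```

Running this shows the third moment can only be $\pm\frac{q-p}{\sqrt{pq}}$ and the fourth moment is pinned at a single value, so no choice of $f$ hits arbitrary targets.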

Answer 2:

Not every combination of moments can be realized by a probability distribution $P$, because probability distributions must be nonnegative and normalized (as noted in the previous answer). However, if $X$ is continuous, you can transform it into any other continuous distribution using the inverse-transformation method: $Z = F^{-1}(F_X(X))$, where $F_X$ is the CDF of $X$ and $F$ is the target CDF. In particular, you can always transform $X$ to a (e.g.) Gaussian distribution, with the standard Gaussian moments $0, 1, 0, 3$.
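A sketch of this inverse-transform idea, assuming $X$ has a known continuous CDF (an Exponential(1), with $F_X(x) = 1 - e^{-x}$, is used here purely as an example): $Z = \Phi^{-1}(F_X(X))$ is then standard normal, so its sample moments approach $0, 1, 0, 3$.

```python
import math
import random
import statistics

# Inverse-transformation method: F_X(X) is Uniform(0,1) for continuous X,
# so Z = Phi^{-1}(F_X(X)) has a standard normal distribution.
random.seed(0)
norm = statistics.NormalDist()  # standard normal; provides inv_cdf

x = [random.expovariate(1.0) for _ in range(200_000)]
z = [norm.inv_cdf(1.0 - math.exp(-v)) for v in x]  # F_X(x) = 1 - exp(-x)

for n in (1, 2, 3, 4):
    print(statistics.fmean(v**n for v in z))  # approaches 0, 1, 0, 3
```

The sample moments carry Monte Carlo error of order $1/\sqrt{N}$, but they converge to the standard normal values; the transform is exact in distribution, not merely moment-matching.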