Bounding the third and fourth moments


1) I have seen it mentioned that if $X$ is a zero-mean random variable, then $\frac{E[X^4]}{\sigma^4} \geq \left\{\frac{E[X^3]}{\sigma^3}\right\}^2 + 1$, where $E[X^2]=\sigma^2$.

Is this true? If so, what is the proof? (I am able to prove that $\frac{E[X^4]}{\sigma^4} \geq \left\{\frac{E[X^3]}{\sigma^3} \right\}^2$.)

2) Similar to the above statement, is it possible to find an upper or lower bound for the third moment $E[X^3]$ in terms of the second moment $E[X^2]$, for a zero-mean $X$?

I tried using Hölder's and Jensen's inequalities to prove these, but they didn't point me in the right direction for the bounds. I got bounds like $E[X^3] \leq \sqrt{E[X^2]\cdot E[X^4]}$, which gives me $E[X^4] \geq \{E[X^3]/\sigma\}^2$.




This is a partial answer to some of the questions. It gives details on my comments above. (Answering will also help me find this question again later if I want.)

On upper/lower bounds for $E[X^3]$

If $E[X]=0$ and $E[X^2]$ is finite, this places no upper or lower bound on $E[X^3]$. Indeed, define $X$ with PDF $$ f_X(x) = (1/2)\delta(x+1/2) + 1\{x\geq 0\} \frac{(3/2)}{(x+1)^4} \quad \forall x \in \mathbb{R}$$ where $1\{x \geq 0\}$ is an indicator function that is 1 if $x\geq 0$ (and zero else), and $\delta(x+1/2)$ is a unit impulse at $x=-1/2$. So $P[X=-1/2]=P[X\geq 0]=1/2$ and indeed $\int_{-\infty}^{\infty} f_X(x)dx = 1$. Also, $$ E[X]=0, \quad E[X^2] = 5/8, \quad E[X^3] = \infty $$ Fix $a> 0$. Defining $Z=aX$ gives $E[Z]=0$, $E[Z^3]=\infty$, and $E[Z^2]$ can be any positive value we like (by scaling $a>0$). Likewise, if we define $Y=-X$, then we see $E[Y^3]=-\infty$ even though $E[Y]=0$ and $E[Y^2]$ is finite.
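As a quick numerical sanity check (my own addition, using `scipy.integrate.quad`), the finite moments of this mixed distribution can be computed by treating the atom at $x=-1/2$ separately from the continuous tail on $[0,\infty)$:

```python
# Numerical check of the moments of
# f_X(x) = (1/2) delta(x + 1/2) + 1{x >= 0} (3/2)/(x + 1)^4.
from scipy.integrate import quad

def moment(k):
    """E[X^k]: atom of mass 1/2 at x = -1/2 plus the continuous tail."""
    atom = 0.5 * (-0.5) ** k
    tail, _ = quad(lambda x: x ** k * 1.5 / (x + 1) ** 4, 0, float("inf"))
    return atom + tail

print(moment(0))  # total probability mass: 1.0
print(moment(1))  # E[X] = 0
print(moment(2))  # E[X^2] = 5/8 = 0.625
```

The third moment is deliberately not computed this way: the integrand $x^3 \cdot (3/2)/(x+1)^4$ decays like $1/x$, so the integral diverges, which is exactly the point of the construction.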

On the conjecture $\frac{E[X^4]}{\sigma^4} \geq 1 + \frac{E[X^3]^2}{\sigma^6}$ for all $X$ such that $E[X]=0$

(Assuming $\sigma^2=Var(X)>0$.) As in my comments above, this is true in the special case $E[X^3]=0$, since it reduces to $E[X^4] \geq E[X^2]^2$ (which is true by Jensen's inequality).

In the general case $E[X^3]\neq 0$, by defining $Y=X/\sigma$, we see the desired inequality is true if and only if $$E[Y^4] \geq 1 + E[Y^3]^2$$ for all random variables $Y$ with $E[Y]=0$, $E[Y^2]=1$.
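Before proving the normalized inequality, one can spot-check it against a few familiar distributions. For zero-mean, unit-variance $Y$, the skewness $s$ and kurtosis $k$ are exactly $E[Y^3]$ and $E[Y^4]$; the $(s,k)$ values below are standard textbook facts, not derived in this answer. Note the two-point Bernoulli case meets the bound with equality:

```python
# Spot-check E[Y^4] >= 1 + E[Y^3]^2 using known standardized moments
# (skewness s = E[Y^3], kurtosis k = E[Y^4]) of zero-mean, unit-variance laws.
cases = {
    "normal":           (0.0, 3.0),
    "uniform":          (0.0, 1.8),
    "exponential":      (2.0, 9.0),
    "bernoulli(p=0.2)": (1.5, 3.25),  # two-point law: bound holds with equality
}
for name, (s, k) in cases.items():
    slack = k - (1 + s ** 2)
    assert slack >= 0, name
    print(f"{name}: E[Y^4] - (1 + E[Y^3]^2) = {slack:.2f}")
```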

On tightness of $E[X^4] \geq 1 + E[X^3]^2$

As in my comments above, fix $m\geq 0$. We can design a random variable $X$ that satisfies $E[X]=0, E[X^2]=1, E[X^3]=m$, and that satisfies $E[X^4] = 1 + E[X^3]^2$. This is achieved by: \begin{align} X &= \left\{ \begin{array}{ll} a &\mbox{ with prob $1/(1+a^2)$} \\ (-1/a) & \mbox{ with prob $a^2/(1+a^2)$} \end{array} \right. \\ a &= (1/2)\left(m + \sqrt{m^2+4}\right) \end{align} Indeed, with $m\geq 0$ we see that $a \geq 1$ and $$ E[X] = \frac{(a)(1)}{1+a^2} + \frac{(-1/a)(a^2)}{1+a^2} = 0$$ $$ E[X^2] = \frac{(a)^2(1)}{1+a^2} + \frac{(-1/a)^2(a^2)}{1+a^2} = 1$$ $$ E[X^3] = \frac{(a)^3(1)}{1+a^2} + \frac{(-1/a)^3(a^2)}{1+a^2} = \frac{a^2-1}{a} = m $$ $$ E[X^4] = \frac{(a)^4(1)}{1+a^2} + \frac{(-1/a)^4(a^2)}{1+a^2} =1 + \underbrace{\frac{(a^2-1)^2}{a^2}}_{E[X^3]^2}$$ Scaling $X$ by $\sigma$ gives cases of zero-mean variables with $\frac{E[X^4]}{\sigma^4} = 1 + \frac{E[X^3]^2}{\sigma^6}$.
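A short numerical check of this two-point construction (my addition, pure standard library):

```python
import math

def twopoint_moments(m):
    """First four moments of X in {a, -1/a} with P[X = a] = 1/(1 + a^2)."""
    a = 0.5 * (m + math.sqrt(m * m + 4.0))
    p = 1.0 / (1.0 + a * a)   # P[X = a]
    q = 1.0 - p               # P[X = -1/a] = a^2/(1 + a^2)
    return [p * a ** k + q * (-1.0 / a) ** k for k in range(1, 5)]

for m in (0.0, 0.5, 2.0, 10.0):
    e1, e2, e3, e4 = twopoint_moments(m)
    # E[X] = 0, E[X^2] = 1, E[X^3] = m, E[X^4] = 1 + m^2 (up to roundoff)
    assert max(abs(e1), abs(e2 - 1), abs(e3 - m), abs(e4 - (1 + m * m))) < 1e-9
```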

Defining $Y=-X$ gives $E[Y]=0, E[Y^2]=1, E[Y^3]=-m$ and $E[Y^4] = 1 + E[Y^3]^2$.


This answer proves that if $X$ is a random variable that satisfies $E[X]=0$, $E[X^2]=1$, and $E[X^3] \in \mathbb{R}$, then $$ E[X^4] \geq 1 + E[X^3]^2$$ As in my other answer, applying this to $X=Y/\sigma$ extends the result to any zero-mean $Y$ with variance $\sigma^2$: $E[(Y/\sigma)^4] \geq 1 + E[(Y/\sigma)^3]^2$.


Fix $m \geq 0$. In my other answer I constructed the following random variable to show tightness of the conjectured inequality:
\begin{align} X^* &= \left\{ \begin{array}{ll} a &\mbox{ with prob $1/(1+a^2)$} \\ (-1/a) & \mbox{ with prob $a^2/(1+a^2)$} \end{array} \right. \\ a &= (1/2)\left(m + \sqrt{m^2+4}\right) \end{align} Here I show that this random variable solves the problem of minimizing $E[X^4]$ subject to $E[X]=0$, $E[X^2]=1$, $E[X^3]=m$, and that the minimizing value is: $$ E[X^4] = c = 1 + m^2 $$ Once we show this, it follows that for any random variable $X$ with $E[X]=0, E[X^2]=1$, $E[X^3]\geq 0$ we get: $$ E[X^4] \geq 1+E[X^3]^2 $$ Now if $X$ is a random variable that satisfies $E[X]=0, E[X^2]=1, E[X^3]<0$, we can define $Y=-X$ (so that $E[Y^3]=-E[X^3]$) to conclude $$ E[X^4]=E[Y^4] \geq 1 + E[Y^3]^2 = 1 + E[X^3]^2 $$ So the desired result holds regardless of whether $E[X^3]\geq 0$ or $E[X^3]<0$.


Fix $m \geq 0$. Define $\mathcal{S}$ as the space of all random variables $X$ that satisfy $E[X^4]<\infty$. Define the set $\mathcal{A} \subseteq \mathbb{R}^4$ by: $$ \mathcal{A} = \{\left(E[X], E[X^2], E[X^3], E[X^4]\right) : X \in \mathcal{S}\} $$ Consider the following two optimization problems:

  • Problem 1 (Constrained problem):
    \begin{align} \mbox{Minimize:} \quad & y_4 \\ \mbox{Subject to:} \quad & y_1 = 0 \\ & y_2 = 1\\ & y_3 = m\\ & (y_1, y_2, y_3, y_4) \in \mathcal{A} \end{align}

  • Problem 2 (Unconstrained problem): Given real numbers $\lambda_1, \lambda_2, \lambda_3$, we have \begin{align} \mbox{Minimize:} \quad & y_4 + \lambda_1 y_1 + \lambda_2 (y_2-1) + \lambda_3 (y_3-m) \\ \mbox{Subject to:} \quad & (y_1, y_2, y_3, y_4) \in \mathcal{A} \end{align}

Lemma: (Lagrange multiplier theorem) If $(y_1, y_2, y_3, y_4)$ is a solution to the unconstrained problem that satisfies $y_1=0, y_2=1, y_3=m$, then it is also a solution to the constrained problem.

Proof: Clearly $(y_1, y_2, y_3, y_4)$ satisfies the constraints of the constrained problem. Let $(w_1, w_2,w_3,w_4)$ be another vector that satisfies the constraints of the constrained problem. We want to show that $y_4 \leq w_4$. Since $(w_1, w_2,w_3,w_4) \in \mathcal{A}$ and $(y_1, y_2, y_3, y_4)$ solves the unconstrained problem over all vectors in $\mathcal{A}$ we get $$ y_4 + \lambda_1y_1 + \lambda_2(y_2-1) + \lambda_3(y_3-m) \leq w_4 + \lambda_1w_1 + \lambda_2(w_2-1) + \lambda_3(w_3-m) $$ However, since $y_1=w_1=0$, $y_2=w_2=1$, $y_3=w_3=m$, the above inequality reduces to $$ y_4 \leq w_4$$ $\Box$

Now define $a= (1/2)\left(m + \sqrt{m^2+4}\right)$ (this is consistent with the definition of $a$ for the 2-valued random variable $X^*$ above). Note that $a \geq 1$. Define the following particular real numbers: \begin{align} \lambda_1 &= -2/a + 2a\\ \lambda_2 &= 1/a^2 - 4 + a^2\\ \lambda_3 &= -2a+2/a\\ c &= 1+m^2 \end{align} Define the function $f:\mathbb{R}\rightarrow\mathbb{R}$ by $$ f(x) = x^4 + \lambda_1x + \lambda_2(x^2-1) + \lambda_3(x^3-m)$$ I designed these $\lambda_i$ values to get the following nice factorization: For all $x \in \mathbb{R}$ we have: $$ f(x) = (x-a)^2(x+1/a)^2+c \geq c $$

Thus for all random variables $X \in \mathcal{S}$ we have: $$ f(X) \geq c $$ and so $$ E[f(X)] \geq c$$ That is, $$E[X^4] + \lambda_1E[X] + \lambda_2(E[X^2]-1) + \lambda_3(E[X^3]-m) \geq c $$

On the other hand, the 2-valued random variable $X^*$ defined above takes values either $a$ or $-1/a$, and so for all realizations of that random variable we get $$ f(X^*) = (X^*-a)^2(X^*+1/a)^2+c = c $$ And so $E[f(X^*)]=c$. Thus, that 2-valued random variable minimizes the expression $$ E[X^4] + \lambda_1E[X] + \lambda_2(E[X^2]-1) + \lambda_3(E[X^3]-m)$$ over all random variables $X \in \mathcal{S}$. In particular, its moments $$(y_1^*,y_2^*,y_3^*,y_4^*) = (E[X^*], E[(X^*)^2], E[(X^*)^3], E[(X^*)^4])$$ solve the unconstrained optimization problem. Further, from my previous answer, we know those moments for $X^*$ satisfy $y_1^*=0, y_2^*=1, y_3^*=m$, and so by the Lagrange multiplier lemma we conclude that $(y_1^*,y_2^*,y_3^*,y_4^*)$ is also optimal for the constrained problem. In particular, the value $y_4^*$ achieved by this 2-valued random variable is the minimum possible value of $E[X^4]$ over all random variables that satisfy $E[X]=0, E[X^2]=1, E[X^3]=m$. In fact, we know: $$ y_4^* = E[(X^*)^4] = 1+m^2 =c$$ $\Box$
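The claimed factorization can also be verified symbolically (a check I am adding, using `sympy`). It works in terms of $a$ alone, using the substitution $m = a - 1/a$, which inverts the definition $a = (1/2)(m + \sqrt{m^2+4})$ for $a \geq 1$:

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
m = a - 1/a                 # inverts a = (m + sqrt(m^2 + 4))/2 for a >= 1
lam1 = -2/a + 2*a
lam2 = 1/a**2 - 4 + a**2
lam3 = -2*a + 2/a
c = 1 + m**2

f = x**4 + lam1*x + lam2*(x**2 - 1) + lam3*(x**3 - m)
residual = sp.simplify(f - ((x - a)**2 * (x + 1/a)**2 + c))
print(residual)  # 0, confirming f(x) = (x - a)^2 (x + 1/a)^2 + c identically
```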