Truncated variance less than variance?


Let the random variable $X$ be distributed (not necessarily normally) with some mean $\mu$ and some variance $\sigma^2$. Under what conditions is it true that

$$ var(X|X<a) \leq var(X)$$ for some $a$ in the support of $X$? I can show it to be true for a normal random variable, and I believe it to be true for the Gumbel and logistic distributions (I simply computed the variances in R for several parameter choices; a sketch of such a check appears below). It is not true for the gamma distribution.

We can make assumptions on the support of $X$, or on, say, the log-concavity of its density. But what conditions guarantee the statement?
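For concreteness, here is a minimal R sketch of the kind of numerical check described above, using Monte Carlo samples rather than exact formulas; the cutoff $a = 1$, the distribution parameters, and the sample size are arbitrary illustrative choices:

```r
# Monte Carlo comparison of var(X | X < a) with var(X).
# Sample size, parameters, and the cutoff are arbitrary choices.
set.seed(1)
n <- 1e6

check <- function(x, a) {
  y <- x[x < a]  # keep only the sample points below the cutoff
  c(var_full = var(x), var_trunc = var(y))
}

check(rnorm(n), a = 1)                # normal
check(rlogis(n), a = 1)               # logistic
check(-log(-log(runif(n))), a = 1)    # standard Gumbel, via inverse CDF
check(rgamma(n, shape = 0.5), a = 1)  # gamma
```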


2 Answers

BEST ANSWER

Without loss of generality (and to simplify the calculations), assume $E[X]=\mu=0$; subtracting a constant from a random variable does not change its variance. Define $Y$ to be the truncated random variable, with density $$f_Y(y)= b\, f_X(y)$$ for $y<a$ and zero otherwise, where $b^{-1}=\int_{-\infty}^a f_X(x)\,dx\le 1$. Then
$$E[Y^2] = \int_{-\infty}^a y^2\, b\, f_X(y)\, dy = b\left(E[X^2]-\int_a^{\infty} y^2 f_X(y)\, dy\right)$$
$$E[Y] = \int_{-\infty}^a y\, b\, f_X(y)\, dy = b\left(E[X]-\int_a^{\infty} y f_X(y)\, dy\right)= -b\int_a^{\infty} y f_X(y)\, dy$$
so
$$V[Y]=E[Y^2]-E[Y]^2=b\,V[X] - b\int_a^{\infty} y^2 f_X(y)\, dy -b^2\left(\int_a^{\infty} y f_X(y)\, dy\right)^2.$$

Define $Z$ to be the random variable with density $$f_Z(z)= c\, f_X(z)$$ for $z>a$ and zero otherwise, where $c^{-1}=\int_a^{\infty} f_X(x)\,dx = 1-b^{-1}$. Writing the tail integrals above in terms of $Z$, and noting that $b/c = b\,(1-b^{-1}) = b-1$,
$$V[Y]=b\,V[X] - \frac{b}{c}E[Z^2] -\left(\frac{b}{c}E[Z]\right)^2=b\,V[X] - (b-1)E[Z^2] -\big((b-1)E[Z]\big)^2.$$
Using $E[Z^2]=V[Z]+E[Z]^2$, this becomes
$$V[Y]=b\,V[X] - (b-1)V[Z] -b(b-1)E[Z]^2,$$
so
$$\Delta=V[Y]-V[X]=(b-1)\left(V[X]-V[Z]-b\,E[Z]^2\right).$$
If $b=1$, then $V[Y]=V[X]$; otherwise $b>1$, in which case $V[Y] \le V[X]$ iff
$$V[Z]+b\,E[Z]^2 \ge V[X],$$
where $Z$ is the random variable whose distribution corresponds to the part of $X$ removed by the truncation, in the case $\mu=0$. For the general case, the condition is
$$V[Z]+b\,\big(E[Z]-\mu\big)^2 \ge V[X].$$
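A quick Monte Carlo sanity check of this condition in R (a sketch only; the normal distribution, its mean, and the cutoff $a$ are arbitrary illustrative choices):

```r
# Check that var(Y) <= var(X) exactly when V[Z] + b*(E[Z] - mu)^2 >= V[X].
# The distribution and cutoff below are arbitrary illustrative choices.
set.seed(1)
x <- rnorm(1e6, mean = 2)  # X, with mu = 2
a <- 2.5                   # truncation point
y <- x[x < a]              # kept part, Y
z <- x[x >= a]             # removed part, Z
b <- 1 / mean(x < a)       # b^{-1} = P(X < a)

var(y) <= var(x)                              # the truncation inequality
var(z) + b * (mean(z) - mean(x))^2 >= var(x)  # the derived condition
```

By the equivalence, the two comparisons should always agree; for the normal case above, both come out `TRUE`.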

ANSWER

Here's a much quicker answer than the accepted one:

As in @Dean's solution, without loss of generality assume $E[X]= 0$.

Let

  • $X^+ = \begin{cases} X & \text{if } X > a \\ 0 & \text{otherwise} \end{cases}$
  • $X^- = \begin{cases} X & \text{if } X \le a \\ 0 & \text{otherwise} \end{cases}$

Clearly, $X = X^+ + X^-$. Also note that $X^+X^-=0$ (with probability $1$) because they cannot both be nonzero simultaneously.

\begin{align*} Var[X] &= E[X^2] - \underbrace{E[X]^2}_{=0} \\ &= E[(X^+ + X^-)^2]\\ &= E[(X^+)^2 + (X^-)^2 + 2 X^+X^-]\\ &= E[(X^+)^2] + E[(X^-)^2] + 2 E[\underbrace{X^+X^-}_{=0}]\\ &= E[(X^+)^2] + E[(X^-)^2]. \end{align*}

Now, note \begin{align*} Var[X^+] &= E[(X^+)^2] - E[X^+]^2\\ &\le E[(X^+)^2] && \text{(since $E[X^+]^2\ge 0$)}\\ &\le E[(X^+)^2]+E[(X^-)^2] && \text{(since $E[(X^-)^2]\ge 0$)}\\ &= Var[X]. \end{align*}
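As a sanity check, here is a short R snippet, under the same convention $E[X]=0$, verifying both the decomposition and the final bound on simulated data (the distribution and cutoff are arbitrary illustrative choices):

```r
# Verify Var[X] = E[(X^+)^2] + E[(X^-)^2] and Var[X^+] <= Var[X] numerically.
# The distribution and cutoff are arbitrary illustrative choices.
set.seed(1)
x <- rnorm(1e6, mean = 3)
x <- x - mean(x)           # center so that E[X] = 0, as assumed above
a <- 0.7
xp <- ifelse(x > a, x, 0)  # X^+
xm <- ifelse(x <= a, x, 0) # X^-

all.equal(var(x), mean(xp^2) + mean(xm^2), tolerance = 1e-3)  # decomposition
var(xp) <= var(x)                                             # the bound
```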