Expected value of the maximum of $3$ normal variables


I want to prove that for $3$ variables $X_1,X_2,X_3$, each from a normal distribution with $\mu = 0, \sigma = 1$, we have $\mathbb{E}(\max \{X_1,X_2,X_3\})\approx 0.85$.
I found the following formula: $\mu +\frac{3}{2\sqrt{\pi}}\sigma$, which gives this value, but I do not know how to prove it. (It would be sufficient to prove it only for $\mu = 0, \sigma = 1$.)
In addition I need to prove for a general $k\in \mathbb{N}$ that $\mathbb{E}(\max \{X_1,X_2,\ldots ,X_k\})<\mathbb{E}(\max \{X_1,X_2,\ldots ,X_{k+1}\})$.
How can this be shown? Thank you!



For the calculation, extreme value theory might be helpful, in particular the Gumbel distribution.

For the proof, start by determining the distribution of the maximum of two normal random variables, then generalize the idea for larger $k$. For $k = 2$,

$$P(\max\{X_1, X_2\} \leq x) = P(X_1 \leq x \wedge X_2 \leq x),$$ which by independence is equal to $$P(X_1 \leq x) P(X_2 \leq x) = \Phi(x)^2,$$

where $\Phi$ is the standard normal CDF. Then you can calculate the expectation via the layer-cake representation $$E[Y] = -\int_{-\infty}^0 F_Y(x) dx + \int_{0}^{\infty} (1 - F_Y(x)) dx, $$ where for a random variable $Y$, $F_Y$ is its CDF. The trick is noticing that the integrand of the positive part gets bigger and the integrand of the negative part gets smaller as $k$ increases...
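The layer-cake computation can also be checked numerically. Below is a minimal sketch using only the Python standard library (the function name `expected_max` and the quadrature parameters are choices made here, not part of the answer); it truncates the integrals to $[-10, 10]$, where the tails are negligible.

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def expected_max(k, lo=-10.0, hi=10.0, n=50_000):
    """E[max of k iid N(0,1)] via the layer-cake formula
    E[Y] = -int_{-inf}^0 F_Y(x) dx + int_0^inf (1 - F_Y(x)) dx,
    with F_Y(x) = Phi(x)**k.  Midpoint rule on [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        F = Phi(x) ** k
        total += (-F if x < 0.0 else 1.0 - F) * h
    return total

print(expected_max(3))  # close to 3/(2*sqrt(pi)) ~ 0.8463
```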


I suppose the $3$ r.v.'s are i.i.d.?

You can start by computing the CDF of the max of your $3$ r.v.'s, using the fact that the max is less than $x$ if and only if all three variables are less than $x$, and then using their independence.


Denote $\max(X_1, X_2, X_3)$ by $X$. Assuming independence, it is easy to derive that the distribution function of $X$ is $F(x) = \Phi(x)^3$; hence the density function of $X$ is $f(x) = F'(x) = 3\Phi^2(x)\phi(x)$, where $\Phi(x)$ and $\phi(x)$ are the distribution function and density function of the standard normal distribution. It thus follows that $$E(X) = \int_{-\infty}^\infty xf(x) dx = 3\int_{-\infty}^\infty x\Phi^2(x)\phi(x)dx = 3\int_{-\infty}^0x\phi(x)\Phi^2(x)dx + 3\int_0^\infty x\phi(x)\Phi^2(x)dx. $$

For the first integral, substitute $t = -x$ and use $\Phi(-t) = 1 - \Phi(t)$, $\phi(-t) = \phi(t)$; it follows that \begin{align*} & \int_{-\infty}^0x\phi(x)\Phi^2(x)dx \\ =& -\int_0^\infty t\phi(t)(1 - \Phi(t))^2dt = -\int_0^\infty t\phi(t)(1 - 2\Phi(t) + \Phi^2(t))dt \\ =& -\int_0^\infty t\phi(t)dt + 2\int_0^\infty t\phi(t)\Phi(t)dt - \int_0^\infty t\phi(t)\Phi^2(t)dt. \end{align*} Therefore, \begin{align*} E(X) = -3\int_0^\infty t\phi(t)dt + 6\int_0^\infty t\phi(t)\Phi(t)dt, \end{align*} where (for the derivation of the second integral, see this answer) \begin{align*} & \int_0^\infty t\phi(t) dt = \frac{1}{\sqrt{2\pi}}\int_0^\infty te^{-t^2/2}dt = \frac{1}{\sqrt{2\pi}}\int_0^\infty e^{-y}dy = \frac{1}{\sqrt{2\pi}}, \\ & \int_0^\infty t\phi(t)\Phi(t) dt = \frac{1+\sqrt2}{4\sqrt\pi}. \end{align*} That is, \begin{align} E(X) = \frac{3}{2\sqrt{\pi}} \approx 0.8463. \end{align}
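As a quick sanity check on $E(X) = 3/(2\sqrt{\pi}) \approx 0.8463$, one can simulate directly (a sketch, not part of the derivation; the sample size and seed are arbitrary choices):

```python
import math
import random

random.seed(0)

# Monte Carlo estimate of E[max(X1, X2, X3)] for iid N(0,1) samples.
n = 200_000
total = 0.0
for _ in range(n):
    total += max(random.gauss(0.0, 1.0) for _ in range(3))
estimate = total / n

exact = 3 / (2 * math.sqrt(math.pi))
print(estimate, exact)  # the estimate should agree to about two decimals
```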


Assume $X_1, \ldots, X_{k + 1}$ are i.i.d. $\sim N(0, 1)$ and let $Z_i = \max(X_1, \ldots, X_i), i = k, k + 1$. To prove this inequality, it is easier to apply the expectation formula mentioned in @gandalfbalrogslayer's answer. It follows that \begin{align*} E(Z_i) = -\int_{-\infty}^0\Phi^i(t)dt + \int_0^\infty(1 - \Phi^i(t)) dt = \int_0^\infty[1 - \Phi^i(t) - (1 - \Phi(t))^i]dt, \end{align*} whence \begin{align*} E(Z_{k + 1}) - E(Z_k) = \int_0^\infty[(\Phi^k(t) - \Phi^{k + 1}(t)) + ((1 - \Phi(t))^k - (1 - \Phi(t))^{k + 1})] dt > 0, \end{align*} because for every $t > 0$, $\Phi^k(t) - \Phi^{k + 1}(t) > 0$ and $(1 - \Phi(t))^k - (1 - \Phi(t))^{k + 1} > 0$ and the integrand is continuous everywhere on $(0, +\infty)$.
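The one-sided integral formula for $E(Z_i)$ lends itself to a direct numerical check of the strict monotonicity (a sketch; the name `e_max` and the quadrature parameters are introduced here for illustration):

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def e_max(k, hi=10.0, n=50_000):
    """E(Z_k) = int_0^inf [1 - Phi(t)**k - (1 - Phi(t))**k] dt,
    evaluated with a midpoint rule on [0, hi]; the tail beyond
    hi = 10 is negligible."""
    h = hi / n
    total = 0.0
    for i in range(n):
        p = Phi((i + 0.5) * h)
        total += (1.0 - p ** k - (1.0 - p) ** k) * h
    return total

vals = [e_max(k) for k in range(1, 7)]
print(vals)  # strictly increasing, starting from ~0 at k = 1
```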

Alternatively, if you are comfortable with measure-theoretic probability, then we can argue as @TonyK suggested in the comment. As $Z_{k + 1} - Z_k$ is non-negative, and \begin{align} & P(Z_{k + 1} - Z_k > 0) = P(X_{k + 1} > Z_k) \\ =& \int_{-\infty}^\infty P(Z_k < x)\phi(x)dx = \int_{-\infty}^\infty \Phi^k(x)\phi(x)dx = \frac{1}{k + 1} > 0, \end{align} it follows that (e.g., by Theorem $15.2$ (ii) in Probability and Measure by Billingsley) \begin{align*} E(Z_{k + 1}) - E(Z_k) = E(Z_{k + 1} - Z_k) =\int_\Omega (Z_{k + 1} - Z_k) dP > 0. \end{align*}
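The intermediate fact $P(Z_{k+1} - Z_k > 0) = 1/(k+1)$ also has a symmetry explanation: each of the $k+1$ i.i.d. variables is equally likely to be the largest. It is easy to confirm by simulation (a sketch; the function name and sample size are arbitrary choices):

```python
import random

random.seed(1)

def p_new_max(k, trials=100_000):
    """Estimate P(X_{k+1} > max(X_1, ..., X_k)) for iid N(0,1) samples.
    The exact value is 1/(k+1): by exchangeability each variable is
    equally likely to be the maximum, and ties have probability zero."""
    hits = 0
    for _ in range(trials):
        xs = [random.gauss(0.0, 1.0) for _ in range(k + 1)]
        if xs[-1] > max(xs[:-1]):
            hits += 1
    return hits / trials

print(p_new_max(3))  # close to 1/4
```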