Conditional expectation $E[\max{\{X_{0},X_{1},\ldots,X_{n}\}}|X_{0}]$


Let $X_{0}, X_{1}, \ldots, X_{n}$ be independent random variables with uniform distribution on $[0,1]$.

How can I compute $E[\max{\{X_{0},X_{1},\ldots,X_{n}\}}\mid X_{0}]$? Does the answer depend on $n$?


BEST ANSWER

Denote $\max\{X_0, X_1, \ldots, X_n\}$ by $M_n$; we first find the conditional probability $P[M_n > x \mid X_0]$ for $x \in (0, 1)$. Intuitively, $P[M_n \leq x \mid X_0](\omega) = I_{[X_0 \leq x]}(\omega)\,x^n$, since given $X_0$, the event $[M_n \leq x]$ requires $X_0 \leq x$ together with the $n$ independent events $[X_i \leq x]$, $i = 1, \ldots, n$. Therefore, an intuitive solution is as follows: \begin{align} & E[M_n \mid X_0] \\ = & \int_0^1 P[M_n > x \mid X_0]\, dx \\ = & \int_0^1 \left(1 - I_{[X_0 \leq x]}\,x^n\right) dx \\ = & 1 - \int_{X_0}^1 x^n\, dx\\ = & \boxed{\frac{n}{n + 1} + \frac{1}{n + 1}X_0^{n + 1}} \tag{$*$} \end{align}
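As a quick sanity check on the boxed formula $(*)$ (not part of the original argument), one can compare a Monte Carlo estimate of $E[M_n \mid X_0 = x_0]$ against $\frac{n}{n+1} + \frac{1}{n+1}x_0^{n+1}$. A minimal Python sketch; the function names are my own:

```python
import random

def mc_conditional_max(x0, n, trials=200_000, seed=0):
    """Monte Carlo estimate of E[max{X_0, ..., X_n} | X_0 = x0],
    where X_1, ..., X_n are i.i.d. Uniform[0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        m = x0  # condition on X_0 = x0 by fixing it
        for _ in range(n):
            m = max(m, rng.random())
        total += m
    return total / trials

def closed_form(x0, n):
    """The boxed formula (*): n/(n+1) + x0^(n+1)/(n+1)."""
    return n / (n + 1) + x0 ** (n + 1) / (n + 1)

# The estimates agree with (*) up to Monte Carlo error.
for n in (1, 3, 10):
    for x0 in (0.2, 0.9):
        assert abs(mc_conditional_max(x0, n) - closed_form(x0, n)) < 5e-3
```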

It remains to show that the right-hand side of $(*)$ is indeed a version of $E[M_n \mid X_0]$. To check this, we need to show that for each $H \in \sigma(X_0)$ we have $$\int_H M_n \,dP = \int_H \left[\frac{n}{n + 1} + \frac{1}{n + 1}X_0^{n + 1}\right] dP \tag{$**$}$$ Since the sets $[X_0 \leq x_0]$ form a $\pi$-system generating $\sigma(X_0)$, it suffices to verify $(**)$ for $H = [X_0 \leq x_0]$ with $x_0 \in (0, 1]$. By the change-of-variables formula, the right-hand side of $(**)$ equals \begin{align} \frac{n}{n + 1}x_0 + \frac{1}{(n + 1)(n + 2)}x_0^{n + 2}. \end{align} To calculate the left-hand side of $(**)$, denote $Y_n = \max\{X_1, \ldots, X_n\}$, so that $M_n = \max\{X_0, Y_n\}$ and $X_0$, $Y_n$ are independent. By independence, the joint law of $(X_0, Y_n)$ on $(0, 1] \times (0, 1]$ is the product measure $\pi = \mu_{X_0} \times \mu_{Y_n}$; moreover, $\mu_{X_0}$ has density $1$ and $\mu_{Y_n}$ has density $ny^{n - 1}$ with respect to the linear Lebesgue measure. It then follows by Tonelli's theorem (together with the change-of-variables formula) that \begin{align} & \int_{[X_0 \leq x_0]} \max\{X_0, Y_n\}\, dP \\ = & \int_\Omega \max\{X_0(\omega), Y_n(\omega)\}I_{(0, x_0]}(X_0(\omega))\, P(d\omega) \\ = & \iint_{(0, 1] \times (0, 1]} \max\{x, y\}I_{(0, x_0]}(x)\,\pi(d(x, y)) \\ = & \iint_{(0, 1] \times (0, 1]} \max\{x, y\}I_{(0, x_0]}(x)\, ny^{n - 1}\,dx\,dy \\ = & \int_0^1 \left[\int_0^1 \max\{x, y\}\,ny^{n - 1}\, dy\right]I_{(0, x_0]}(x)\, dx \\ = & \int_0^{x_0} \left[\int_0^x x\, ny^{n - 1}\, dy + \int_x^1 y\, ny^{n - 1}\, dy\right] dx \\ = & \int_0^{x_0} \left[x^{n + 1} + \frac{n}{n + 1}\left(1 - x^{n + 1}\right)\right] dx \\ = & \frac{n}{n + 1}x_0 + \frac{1}{(n + 1)(n + 2)}x_0^{n + 2}. \end{align} This matches the right-hand side, so the answer in the box is verified.
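The identity $(**)$ on the sets $[X_0 \leq x_0]$ can likewise be checked numerically: estimate the left-hand side $E[M_n\, I_{[X_0 \leq x_0]}]$ by simulation and compare it with the closed form $\frac{n}{n+1}x_0 + \frac{1}{(n+1)(n+2)}x_0^{n+2}$ computed above. A rough sketch, with names of my own choosing:

```python
import random

def lhs_mc(x0, n, trials=400_000, seed=1):
    """Monte Carlo estimate of the left side of (**):
    E[ M_n * 1{X_0 <= x0} ], all X_i i.i.d. Uniform[0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        X0 = rng.random()
        m = max([X0] + [rng.random() for _ in range(n)])
        if X0 <= x0:
            total += m
    return total / trials

def rhs_exact(x0, n):
    """Closed form of both sides of (**) derived in the text."""
    return n * x0 / (n + 1) + x0 ** (n + 2) / ((n + 1) * (n + 2))

# For x0 = 1 this reduces to E[M_n] = (n + 1)/(n + 2).
for n in (2, 5):
    for x0 in (0.3, 1.0):
        assert abs(lhs_mc(x0, n) - rhs_exact(x0, n)) < 5e-3
```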

Another answer:

Let $M_{n}=\max{\{X_{1},\ldots,X_{n}\}}$. The CDF of $M_{n}$ is $F_{M_{n}}(t)=t^{n}$ for $t\in[0,1]$, hence the density of $M_{n}$ is $f_{M_{n}}(t)=n t^{n-1}$ for $t\in[0,1]$. Note that $$E[\max{\{X_{0},X_{1},\ldots, X_{n}\}}\mid X_{0}]=E[\max{\{X_{0},\max{\{X_{1},\ldots, X_{n}\}}\}}\mid X_{0}]=E[\max{\{X_{0},M_{n}\}}\mid X_{0}],$$ and since $M_{n}$ is independent of $X_{0}$, we may integrate out $M_{n}$ while treating $X_{0}$ as a constant: $$E[\max{\{X_{0},X_{1},\ldots, X_{n}\}}\mid X_{0}]=\int_{0}^{1}\max{\{X_{0},t\}}\, n t^{n-1}\,dt.$$ The integral on the right-hand side is $$\int_{0}^{X_{0}}X_{0}\, n t^{n-1}\,dt+\int_{X_{0}}^{1}n t^{n}\, dt=X_{0}^{n+1}+\frac{n}{n+1}\left(1-X_{0}^{n+1}\right)=\frac{n}{n+1}+\frac{1}{n+1}X_{0}^{n+1},$$ which concludes the proof. Is this a correct attempt?
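The one-dimensional integral above can also be checked without simulation: approximate $\int_0^1 \max\{x_0, t\}\, n t^{n-1}\, dt$ by a midpoint rule and compare with the closed form. A small sketch, with my own function names:

```python
def midpoint_integral(x0, n, steps=200_000):
    """Midpoint-rule approximation of the integral
    ∫_0^1 max{x0, t} * n * t^(n-1) dt."""
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        total += max(x0, t) * n * t ** (n - 1) * h
    return total

def closed_form(x0, n):
    """The answer derived above: n/(n+1) + x0^(n+1)/(n+1)."""
    return n / (n + 1) + x0 ** (n + 1) / (n + 1)

# Agreement up to quadrature error, including the endpoints
# x0 = 0 (plain E[M_n] = n/(n+1)) and x0 = 1 (integral = 1).
for n in (1, 4, 9):
    for x0 in (0.0, 0.5, 1.0):
        assert abs(midpoint_integral(x0, n) - closed_form(x0, n)) < 1e-4
```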