Conditional expectation and computation of double integrals


I consider $X,Y$ two independent uniform random variables on $[0,1]$ and $M=\min(X,Y)$.

I want to compute $\mathbb{E}(X^{2} \mid M)$ using the orthogonality relation that characterizes the conditional expectation. For this I consider $Z$ a $\sigma(M)$-measurable random variable; we know it is a function of $M$, so that $Z = \psi(M)$. Thus we are seeking $\phi(M)$ satisfying, for every measurable function $\psi$, the relation

$$ \mathbb{E}(\psi(M)(X^{2} - \phi(M))) = 0 $$

By the transfer theorem, independence, and Fubini we get

$$ \int_{\Omega} \psi(M)(X^{2} - \phi(M))\,d\mathbb{P} = \int_{\mathbb{R}}\int_{\mathbb{R}} \psi(m)(x^{2} - \phi(m))\,d\mathbb{P}_X(x)\,d\mathbb{P}_Y(y) = \int_{0}^{1}\int_{0}^{1}\psi(m)(x^{2} - \phi(m))\,dx\,dy = 0 $$

where $m=\min(x,y)$. From there we can split the integral on $x$ as follows to get rid of the minimum

$$ \int_{0}^{1}\left(\int_{0}^{y}\psi(x)(x^{2} - \phi(x))dx + \int_{y}^{1}\psi(y)(x^{2} - \phi(y))dx \right)dy = \int_{0}^{1}\left(\int_{0}^{y}\psi(x)(x^{2} - \phi(x))dx +\psi(y) \left(\frac{1-y^3}{3} - (1-y)\phi(y)\right) \right)dy = 0 $$

I wanted to simplify the two inner integrals into a single expression under one integral, but I am stuck on the first one. I think my splitting is not the right one, but I don't see which one to apply. Do you have any ideas?

I would like to stick with this particular method, please.

Thank you very much!


There are 3 answers below.

BEST ANSWER

Your approach is good. What you need is to convert the double integral (an expectation involving two random variables) into a single integral (an expectation of a function of one random variable). Here is what I suggest:

  • Solution 1: based on your idea
  • Solution 2: a generic approach for this kind of problem (longer than Solution 1)

Solution 1:

$$\begin{align} 0 &= \mathbb{E}(\psi(M)(X^{2} - \phi(M))) \\ &= \mathbb{E}(\psi(M)(X^{2} - \phi(M)) \mathbf{1}_{\{X\ge Y \}})+\mathbb{E}(\psi(M)(X^{2} - \phi(M)) \mathbf{1}_{\{X\le Y \}}) \\ &= \mathbb{E}(\psi(Y)(X^{2} - \phi(Y)) \mathbf{1}_{\{X\ge Y \}})+\mathbb{E}(\psi(X)(X^{2} - \phi(X)) \mathbf{1}_{\{X\le Y \}}) \\ &= \mathbb{E}(\psi(Y)X^{2}\mathbf{1}_{\{X\ge Y \}}) -\mathbb{E}(\psi(Y)\phi(Y) \mathbf{1}_{\{X\ge Y \}}) +\mathbb{E}(\psi(X)X^{2} \mathbf{1}_{\{X\le Y \}})-\mathbb{E}(\psi(X) \phi(X) \mathbf{1}_{\{X\le Y \}}) \\ \end{align}$$

(using that $M=Y$ on $\{X\ge Y\}$ and $M=X$ on $\{X\le Y\}$). For each term we have:

$$\begin{align} \text{Term } 1 &= \mathbb{E}(\psi(Y)X^{2}\mathbf{1}_{\{X\ge Y \}}) = \mathbb{E}\left(\psi(Y) \int_Y^1x^2dx\right) =\mathbb{E}\left(\psi(Y)\frac{1-Y^3}{3}\right)\\ \text{Term } 2 &= \mathbb{E}(\psi(Y)\phi(Y)\mathbf{1}_{\{X\ge Y \}}) = \mathbb{E}\left(\psi(Y)\phi(Y) \int_Y^1 dx\right) = \mathbb{E}\left(\psi(Y)\phi(Y) (1-Y)\right)\\ \text{Term } 3 &= \mathbb{E}(\psi(X)X^{2}\mathbf{1}_{\{X\le Y \}}) = \mathbb{E}\left(\psi(X)X^{2} \int_X^1dy\right) = \mathbb{E}\left(\psi(X)X^{2} (1-X)\right)\\ \text{Term } 4 &= \mathbb{E}(\psi(X) \phi(X) \mathbf{1}_{\{X\le Y \}})=\mathbb{E}\left(\psi(X) \phi(X) \int_X^1dy\right)=\mathbb{E}\left(\psi(X) \phi(X) (1-X)\right) \end{align}$$

Then, since $X$ and $Y$ have the same distribution, for every function $\psi$ we have

$$\begin{align} 0 &= \mathbb{E}\left(\psi(Y)\frac{1-Y^3}{3}\right)-\mathbb{E}\left(\psi(Y)\phi(Y) (1-Y)\right)+\mathbb{E}(\psi(X)X^{2} (1-X))-\mathbb{E}(\psi(X) \phi(X) (1-X))\\ &=\mathbb{E} \left( \psi(X)\frac{1-X^3}{3} - \psi(X)\phi(X) (1-X) + \psi(X)X^{2} (1-X) -\psi(X) \phi(X) (1-X) \right)\\ &=\mathbb{E} \left( \psi(X) \left( \frac{1-X^3}{3} - \phi(X) (1-X) + X^{2} (1-X) - \phi(X) (1-X) \right)\right) \\ &= \int_0^1 \psi(x) \color{red}{ \left( \frac{1-x^3}{3} - \phi(x) (1-x) + x^{2} (1-x) - \phi(x) (1-x) \right)}dx \tag{1} \end{align}$$

As $(1)$ holds true for all function $\psi$, we must have for all $x \in [0,1]$ $$\begin{align} &\frac{1-x^3}{3} - \phi(x) (1-x) + x^{2} (1-x) - \phi(x) (1-x) = 0\\ &\implies 2(1-x) \phi(x) = \frac{1-x^3}{3} + x^2(1-x) \\ &\implies \color{red}{\phi(x) = \frac{1+x+4x^2}{6}}\end{align}$$
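As a quick sanity check (my addition, not part of the original derivation), a short Monte Carlo simulation can confirm the orthogonality relation $\mathbb{E}(\psi(M)(X^{2} - \phi(M))) = 0$ with this $\phi$ for a few test functions $\psi$:

```python
import random

random.seed(0)
N = 200_000

def phi(m):
    # the candidate for E(X^2 | M = m) derived above
    return (1 + m + 4 * m * m) / 6

# a few test functions psi; orthogonality should hold for every psi
psis = [lambda m: 1.0, lambda m: m, lambda m: m * m]

sums = [0.0] * len(psis)
for _ in range(N):
    x, y = random.random(), random.random()
    m = min(x, y)
    r = x * x - phi(m)  # residual X^2 - phi(M)
    for i, psi in enumerate(psis):
        sums[i] += psi(m) * r

# each Monte Carlo average of psi(M)(X^2 - phi(M)) should be near 0
assert all(abs(s / N) < 5e-3 for s in sums)
```

Each sample average is within a few standard errors of zero, as expected for the orthogonal projection.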

Solution 2:

From the formula for conditional densities of continuous distributions, we have $$f_{X|M}(x|m) =\frac{f_{X,M}(x,m)}{f_M(m)}$$ where

  • $f_{X|M}(x|m)$: the conditional density function of $X$ given $M$
  • $f_{X,M}(x,m)$: the joint density function of $X$ and $M$
  • $f_M(m)$: the density function of $M$

The density function $f_M(m)$ can be easily calculated:

$$f_M(m) = \frac{\partial}{\partial m}\mathbb{P}(M\le m)= \frac{\partial}{\partial m}\left(1- \mathbb{P}(M\ge m) \right) =\frac{\partial}{\partial m}\left(1- (1-m)^2 \right) =2(1-m)$$
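As an aside (my addition), this CDF, and hence the density $f_M(m)=2(1-m)$, is easy to confirm by simulation:

```python
import random

random.seed(1)
N = 200_000
# simulate M = min(X, Y) for independent uniforms X, Y
samples = [min(random.random(), random.random()) for _ in range(N)]

# compare the empirical CDF of M with P(M <= m) = 1 - (1 - m)^2
for m in (0.2, 0.5, 0.8):
    empirical = sum(s <= m for s in samples) / N
    exact = 1 - (1 - m) ** 2
    assert abs(empirical - exact) < 0.01
```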

And we find the joint density function $f_{X,M}(x,m)$ as follows: for any function $g(X,M)$, we have $$\begin{align} \mathbb{E}(g(X,M)) &= \mathbb{E}(g(X,M) \mathbf{1}_{\{X\ge Y \}})+\mathbb{E}(g(X,M) \mathbf{1}_{\{X\le Y \}})\\ &=\mathbb{E}(g(X,Y) \mathbf{1}_{\{X\ge Y \}})+\mathbb{E}(g(X,X) \mathbf{1}_{\{X\le Y \}})\\ &= \iint_{[0,1]^2} g(x,y)\mathbf{1}_{\{x\ge y \}}\,dx\,dy + \int_{0}^{1}g(x,x)\left( \int_x^1 dy \right) dx\\ &= \iint_{[0,1]^2} g(x,y)\mathbf{1}_{\{x\ge y \}}\,dx\,dy + \int_{0}^{1}g(x,x)(1-x)\, dx\\ &= \iint_{[0,1]^2} g(x,y)\mathbf{1}_{\{x\ge y \}}\,dx\,dy + \iint_{[0,1]^2}g(x,\color{red}{y})(1-x)\color{red}{\mathbf{1}_{\{x= y \}}}\, dx\,dy\\ &=\iint_{[0,1]^2} g(x,y) \underbrace{\left( \mathbf{1}_{\{x\ge y \}} + (1-x)\mathbf{1}_{\{x= y \}} \right)}_{ = f_{X,M}(x,y)}\,dx\,dy \end{align}$$ because $\mathbb{E}(g(X,M))$ is also equal to $\iint_{[0,1]^2} g(x,y) f_{X,M}(x,y)\, dx\,dy$ for every function $g(x,y)$.


Then $$\color{red}{f_{X|M}(x|m) =\frac{ \mathbf{1}_{\{x\ge m \}} + (1-x)\mathbf{1}_{\{x= m \}}}{2(1-m)}} \tag{2}$$ (here $\mathbf{1}_{\{x=m \}}$ stands for the Dirac delta function, sometimes denoted $\delta_{\{x=m \}}$)


Now, we compute $\mathbb{E}(X^2|M)$ by using $(2)$:

$$\begin{align} \mathbb{E}(X^2|M) &= \int_0^1 x^2 f_{X|M}(x|M)dx \\ &= \int_0^1 x^2 \cdot \frac{ \mathbf{1}_{\{x\ge M \}} + (1-x)\mathbf{1}_{\{x=M \}}}{2(1-M)} dx \\ &=\frac{1}{2(1-M)} \left(\int_M^1 x^2 dx + M^2(1-M) \right)\\ &= \frac{1}{2(1-M)} \left(\frac{1-M^3}{3} + M^2(1-M) \right)\\ &= \frac{1+M+4M^2}{6} \end{align}$$
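The last simplification, $\frac{1}{2(1-m)}\left(\frac{1-m^3}{3}+m^2(1-m)\right)=\frac{1+m+4m^2}{6}$ for $m \in [0,1)$, can be double-checked numerically (a small aside, my addition):

```python
# verify the algebraic simplification on a grid of points in [0, 1)
for k in range(100):
    m = k / 100
    lhs = ((1 - m**3) / 3 + m**2 * (1 - m)) / (2 * (1 - m))
    rhs = (1 + m + 4 * m**2) / 6
    assert abs(lhs - rhs) < 1e-12
```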

Remark: this technique easily solves problems of this kind (for example, computing $\mathbb{E}(X^n\mid\min\{X,Y \})$, $\mathbb{E}(\max\{X,Y \}\mid\min\{X,Y \})$, ...).


Here is my approach. Fix $m\in [0,1].$

Set $A_m=(\{m\}\times [m,1])\cup ([m,1]\times \{m\})$.

Then $\{M=m\}=\{(X,Y)\in A_m\}$, so the conditional density of $(X,Y)$ given $M=m$, namely $f_{X,Y|A_m}$, is $$f_{X,Y|A_m}(x,y)=\frac{1}{2(1-m)}\cdot 1_{A_m}(x,y)$$ The $2(1-m)$ in the denominator normalizes the mass of the region below the surface $z=f_{X,Y}(x,y)$ and above $A_m$.

So, $$\mathbb{E}(X^2|M=m)=\int_{y=m}^{y=1}m^2f_{X,Y|A_m}(m,y)\,\mathrm{d}y+\int_{x=m}^{x=1}x^2f_{X,Y|A_m}(x,m)\,\mathrm{d}x=\frac{ 4m^2+m+1 }{6}$$

We get $$\mathbb{E}(X^2|M)=\frac{4M^2+M+1}{6}$$
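As an independent check (my addition, not part of this answer), one can estimate $\mathbb{E}(X^{2} \mid M \approx m)$ by conditioning on a narrow bin around $m$ and compare it with the closed form:

```python
import random

random.seed(2)
N = 1_000_000
m0, h = 0.5, 0.01  # condition on M in [m0 - h, m0 + h]

num = den = 0
for _ in range(N):
    x, y = random.random(), random.random()
    if abs(min(x, y) - m0) < h:
        num += x * x  # accumulate X^2 on the conditioning event
        den += 1

estimate = num / den
exact = (1 + m0 + 4 * m0**2) / 6  # closed form (4m^2 + m + 1)/6 at m = 0.5
assert abs(estimate - exact) < 0.02
```

The binned estimate agrees with $(4m^2+m+1)/6$ up to Monte Carlo noise and the small bias from the bin width $h$.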


I did not understand all the details of the question, so I did not check the correctness of the transformations below. But they give the same value of $\phi(M)$ as Matthew H.'s answer.

$$ \int_{0}^{1}\left(\int_{0}^{y}\psi(x)(x^{2} - \phi(x))dx + \int_{y}^{1}\psi(y)(x^{2} - \phi(y))dx \right)dy = \int_{0}^{1}\left(\int_{0}^{y}\psi(x)(x^{2} - \phi(x))dx +\psi(y) \left(\frac{1\color{red}-y^3}{3} - (1-y)\phi(y)\right) \right)dy = 0.$$

$$\int_{0}^{1}\left(\int_{0}^{y}\psi(x)(x^{2} - \phi(x))dx\right)dy=$$ $$\int_{0}^{1} \int_{x}^{1} \psi(x)(x^{2} - \phi(x)) dy dx=$$ $$\int_{0}^{1} (1-x)\psi(x)(x^{2} - \phi(x))dx.$$

So

$$\int_{0}^{1}\left(\int_{0}^{y}\psi(x)(x^{2} - \phi(x))dx +\psi(y) \left(\frac{1-y^3}{3} - (1-y)\phi(y)\right) \right)dy=$$ $$\int_{0}^{1} (1-t)\psi(t)(t^{2} - \phi(t))+\psi(t) \left(\frac{1-t^3}{3} - (1-t)\phi(t)\right)dt.$$

As I understand it, this expression equals $0$ for every measurable function $\psi$. So $$0=(1-t)(t^{2} - \phi(t))+\left(\frac{1-t^3}{3} - (1-t)\phi(t)\right)=(1-t)(t^{2} - 2\phi(t))+\frac{1-t^3}{3}.$$

When $t\ne 1$ we obtain

$$t^{2} - 2\phi(t)+\frac{1+t+t^2}{3}=0$$

$$\phi(t)=\frac{1+t+4t^2}{6}.$$