Expectation of Random Variables


Let $X$ be uniformly distributed on $[0,1]$, and let $k_1$ and $k_2$ be two non-negative constants (that is, they take values in $[0,+\infty)$).

I want to compute the expectation of the following max involving the two constants:

$$E[\max(X+k_1,k_2)]$$

(that is, the expectation of the maximum of "the random variable plus the constant $k_1$" and "the constant $k_2$").

As someone suggested:

Let $Y=X+k_1$ be the uniform variable on $[k_1,k_1+1]$. Then your variable is $\max \{ Y,k_2 \}$. Cases:

  • If $k_2 < k_1$, then $\max \{ Y,k_2 \} = Y$, so you have $E(Y)=\frac{2k_1+1}{2}$.
  • If $k_2 > k_1+1$, then $\max \{ Y,k_2 \}=k_2$, so you have just $k_2$.
  • If $k_1 \leq k_2 \leq k_1+1$, then $\mathbb{P}(Y \leq k_2) = k_2-k_1$.

How do I handle the last case?

BEST ANSWER

Let $Y=X+k_1$ be the uniform variable on $[k_1,k_1+1]$. Then your variable is $\max \{ Y,k_2 \}$. Cases:

  • If $k_2 < k_1$, then $\max \{ Y,k_2 \} = Y$, so you have $E(Y)=\frac{2k_1+1}{2}$.
  • If $k_2 > k_1+1$, then $\max \{ Y,k_2 \}=k_2$, so you have just $k_2$.
  • If $k_1 \leq k_2 \leq k_1+1$, then $\mathbb{P}(Y \leq k_2) = k_2-k_1$.

Can you handle the last case?
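
For completeness, here is one way to finish the last case (a sketch following the hint above, not part of the original answer): split the expectation over the events $\{Y \leq k_2\}$, where the max equals $k_2$, and $\{Y > k_2\}$, where the max equals $Y$. Since $Y$ has density $1$ on $[k_1,k_1+1]$,

\begin{align}
E[\max\{Y,k_2\}] &= k_2\,\mathbb{P}(Y \leq k_2) + \int_{k_2}^{k_1+1} y \,\mathrm{d}y \\
&= k_2(k_2-k_1) + \frac{(k_1+1)^2 - k_2^2}{2} \\
&= k_1 + \frac{1}{2} + \frac{(k_2-k_1)^2}{2}.
\end{align}

As a sanity check, at $k_2=k_1$ this reduces to $\frac{2k_1+1}{2}$, and at $k_2=k_1+1$ it equals $k_2$, matching the first two cases.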

Another answer:

$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,} \newcommand{\dd}{{\rm d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} \newcommand{\fermi}{\,{\rm f}} \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{{\rm i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}} \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} \newcommand{\sech}{\,{\rm sech}} \newcommand{\sgn}{\,{\rm sgn}} \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$

Note that $\ds{\max\pars{a,b} = {a + b + \verts{a - b} \over 2}}$. Then,

\begin{align}&\color{#66f}{\large{\mathbb E}\bracks{\max\pars{X + k_{1},k_{2}}}} ={\mathbb E}\bracks{X + k_{1} + k_{2} + \verts{X + k_{1} - k_{2}} \over 2} \\[5mm]&={1 \over 4} + {k_{1} + k_{2} \over 2} +\half\,\color{#c00000}{{\mathbb E}\pars{\verts{X + k_{1} - k_{2}}}} \end{align}

\begin{align}&\color{#c00000}{{\mathbb E}\bracks{\verts{X + k_{1} - k_{2}}}} =\int_{0}^{1}\verts{X + k_{1} - k_{2}}\,\dd X \\[5mm]&=\verts{1 + k_{1} - k_{2}} -\int_{0}^{1}X\sgn\pars{X + k_{1} - k_{2}}\,\dd X \\[5mm]&=\verts{1 + k_{1} - k_{2}} - \half\,\sgn\pars{1 + k_{1} - k_{2}} + \int_{0}^{1}X^{2}\,\delta\pars{X + k_{1} - k_{2}}\,\dd X \\[5mm]&=\verts{1 + k_{1} - k_{2}} - \half\,\sgn\pars{1 + k_{1} - k_{2}} + \pars{k_{2} - k_{1}}^{2} \int_{0}^{1}\delta\pars{X - \bracks{k_{2} - k_{1}}}\,\dd X \\[5mm]&=\color{#c00000}{% \verts{1 + k_{1} - k_{2}} - \half\,\sgn\pars{1 + k_{1} - k_{2}} + \pars{k_{2} - k_{1}}^{2}\Theta\pars{k_{2} - k_{1}}\Theta\pars{1 - k_{2} + k_{1}}} \end{align}

\begin{align} &\color{#66f}{\large{\mathbb E}\bracks{\max\pars{X + k_{1},k_{2}}}} \\[5mm]&=\color{#66f}{\large{1 \over 4} + {k_{1} + k_{2} \over 2} +\half\,\verts{1 + k_{1} - k_{2}} - {1 \over 4}\,\sgn\pars{1 + k_{1} - k_{2}}} \\[5mm]&+ \color{#66f}{\large\half\,\pars{k_{2} - k_{1}}^{2} \Theta\pars{k_{2} - k_{1}}\Theta\pars{1 - k_{2} + k_{1}}} \end{align}

$\ds{\delta\pars{x}}$ is the Dirac delta function and $\ds{\Theta\pars{x}}$ is the Heaviside step function.
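
Neither answer includes code, but the two results are easy to cross-check numerically. The sketch below (function names are my own) implements the piecewise result from the accepted answer, the single closed-form expression above, and a Monte Carlo estimate; the sign and step conventions at the measure-zero boundary points are chosen so the closed form agrees with the piecewise cases there.

```python
import random

def heaviside(x):
    """Heaviside step; taking Theta(0) = 1 keeps the boundary cases consistent."""
    return 1.0 if x >= 0 else 0.0

def expected_max(k1, k2):
    """Piecewise E[max(X + k1, k2)], X ~ Uniform[0, 1], from the case analysis."""
    if k2 <= k1:
        return k1 + 0.5                          # max is Y, so E(Y)
    if k2 >= k1 + 1.0:
        return k2                                # max is the constant k2
    return k1 + 0.5 + 0.5 * (k2 - k1) ** 2       # intermediate case

def expected_max_closed(k1, k2):
    """Single closed-form expression with sgn/Heaviside terms."""
    s = 1.0 + k1 - k2
    sgn = 1.0 if s >= 0 else -1.0                # sgn(0) = +1, consistent with Theta(0) = 1
    return (0.25 + 0.5 * (k1 + k2) + 0.5 * abs(s) - 0.25 * sgn
            + 0.5 * (k2 - k1) ** 2 * heaviside(k2 - k1) * heaviside(1.0 - k2 + k1))

def monte_carlo(k1, k2, n=200_000, seed=0):
    """Sample mean of max(X + k1, k2) over n uniform draws."""
    rng = random.Random(seed)
    return sum(max(rng.random() + k1, k2) for _ in range(n)) / n

for k1, k2 in [(1.0, 0.5), (0.2, 2.0), (0.3, 0.9)]:
    print(f"k1={k1}, k2={k2}: piecewise={expected_max(k1, k2):.4f}, "
          f"closed={expected_max_closed(k1, k2):.4f}, MC~{monte_carlo(k1, k2):.4f}")
```

The three columns should agree to Monte Carlo accuracy in all three regimes ($k_2<k_1$, $k_2>k_1+1$, and the intermediate case).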