Sum of two continuous Uniform $(0$,$1)$ random variables without convolution


We can transform one continuous multivariate distribution to another based on two chosen transformation functions, their inverses and derivatives. These course notes go through the process in detail, but to cut to the chase:

  • Start with two random variables $X_1$ and $X_2$.
  • Assume the associated bivariate probability density function is $f(x_1, x_2)$.
  • Choose two transformation functions $y_1(x_1, x_2)$ and $y_2(x_1, x_2)$.
  • Let the derived random variables be $Y_1 = y_1(X_1, X_2)$ and $Y_2 = y_2(X_1, X_2)$.
  • Assume the associated bivariate probability density function is $g(y_1, y_2)$.
  • Assume the inverses of the two transformation functions are $x_1(y_1, y_2)$ and $x_2(y_1, y_2)$.
  • The relationship between $f(x_1, x_2)$ and $g(y_1, y_2)$ is: $$ g(y_1, y_2) = f(x_1(y_1, y_2), x_2(y_1, y_2)) \cdot |J| $$

where $J$ is the determinant of the Jacobian matrix of $[x_1, x_2]$ with respect to $[y_1, y_2]$:

$$ J = \frac{\partial(x_1, x_2)}{\partial(y_1, y_2)} $$

  • As a special case, if $f(x_1, x_2)$ corresponds to a uniform distribution (so $f = 1$ on its support), the relationship is:

$$ g(y_1, y_2) = |J| $$
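As a concrete sanity check of this recipe, the sketch below approximates $|J|$ by finite differences for a hypothetical linear map $y_1 = 2x_1 + x_2$, $y_2 = x_2$ (inverse: $x_1 = (y_1 - y_2)/2$, $x_2 = y_2$); the map and all names are purely illustrative, not taken from the question.

```python
# Numerical check of the recipe g(y1, y2) = f(x1(y1, y2), x2(y1, y2)) * |J|
# for a hypothetical linear transformation: y1 = 2*x1 + x2, y2 = x2,
# whose inverse is x1 = (y1 - y2)/2, x2 = y2.

def inverse(y1, y2):
    """Inverse transformation: (y1, y2) -> (x1, x2)."""
    return (y1 - y2) / 2.0, y2

def jacobian_det(inv, y1, y2, h=1e-6):
    """Finite-difference determinant of d(x1, x2)/d(y1, y2)."""
    x1a, x2a = inv(y1 + h, y2)
    x1b, x2b = inv(y1 - h, y2)
    x1c, x2c = inv(y1, y2 + h)
    x1d, x2d = inv(y1, y2 - h)
    dx1_dy1 = (x1a - x1b) / (2 * h)
    dx2_dy1 = (x2a - x2b) / (2 * h)
    dx1_dy2 = (x1c - x1d) / (2 * h)
    dx2_dy2 = (x2c - x2d) / (2 * h)
    return dx1_dy1 * dx2_dy2 - dx1_dy2 * dx2_dy1

J = jacobian_det(inverse, 0.7, 0.3)
print(abs(J))  # ~0.5: for uniform f, the joint density on the image is |J| = 1/2
```

Since the map is linear, the finite-difference estimate is essentially exact; the image of the unit square has area $2$, so a constant density of $1/2$ on it does integrate to $1$.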

Suppose that $X_1$ and $X_2$ are continuous i.i.d. random variables distributed Uniform(0,1).

Let $Y_1 = X_1 + X_2$ and $Y_2 = X_1 - X_2$

Note that $Y_2$ can be any convenient auxiliary choice, since it's only $Y_1$ we care about.

As far as I can tell, this setup satisfies the requirements of the process outlined above. The transformation functions have inverses, and they determine a bijective mapping between the domain of $f$ and the domain of $g$.

So it should be possible to derive first the joint distribution $g$ of $Y_1$ and $Y_2$ and then the marginal distribution of $Y_1$, which we know is triangular.

But when I go through the process that's not what I get. In fact, my joint distribution (which is just $|J|$) ends up being a constant $1/2$, which doesn't integrate to $1$ over its domain and therefore is not even a valid density function.

I know that normally we would derive the sum of two Uniform random variables using convolution, but I'd like to know if it's possible to use the above process (or if not, I'd like to know why).
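For reference, the triangular target is easy to confirm by simulation: a triangular density on $[0,2]$ implies $P(Y_1 \le 1) = 1/2$ and $E[Y_1] = 1$. A minimal Python sketch (illustrative only):

```python
import random

# Monte Carlo check that Y1 = X1 + X2 (X1, X2 ~ i.i.d. Uniform(0,1))
# behaves like the triangular density on [0, 2]:
# P(Y1 <= 1) = 1/2 and E[Y1] = 1.
random.seed(0)
N = 200_000
below_half = 0
total = 0.0
for _ in range(N):
    y1 = random.random() + random.random()  # one draw of X1 + X2
    total += y1
    if y1 <= 1.0:
        below_half += 1

print(below_half / N)  # ~0.5
print(total / N)       # ~1.0
```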


BEST ANSWER

Consider two independent random variables $X_1$ and $X_2$ with uniform distribution over the interval $[0,1]$. Thus we have $$ f_{X_i}(x) = \begin{cases} 1 & \mbox{if} \ x \in [0,1]\\ 0 & \mbox{otherwise} \end{cases} $$ for $i=1,2$. Consider also, e.g., the transformation \begin{eqnarray} Y_1 &=& X_1+X_2;\\ Y_2 &=& X_2. \end{eqnarray} Then, as stated by your notes (the Jacobian factor is $|J|=1$ here), the joint distribution of $Y_1$ and $Y_2$ is given by $$ f_{Y_1,Y_2}(u,v) = \begin{cases} f_{X_1,X_2}(u-v,v) = f_{X_1}(u-v)\cdot f_{X_2}(v) & \mbox{if} \ \ v \in [0,1] \ \mbox{and} \ u-v \in [0,1]\\ 0 & \mbox{otherwise}, \end{cases} $$ where we used the independence of $X_1$ and $X_2$.

Because you're interested only in the marginal distribution of $Y_1$, we integrate out $v$: $$ f_{Y_1}(u) = \int_{0}^1 f_{X_1}(u-v)\,f_{X_2}(v)\, dv, $$ which is exactly the convolution you were expecting.
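This convolution integral can be evaluated numerically. The sketch below (plain Python, midpoint Riemann sum; all names illustrative) recovers the triangular density $f_{Y_1}(u) = u$ on $[0,1]$ and $2-u$ on $[1,2]$:

```python
def f_uniform(x):
    """Density of Uniform(0, 1)."""
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def marginal_y1(u, steps=10_000):
    """Midpoint Riemann sum for the convolution integral over v in [0, 1]."""
    dv = 1.0 / steps
    return sum(f_uniform(u - (k + 0.5) * dv) * dv for k in range(steps))

# Values match the triangular density: u on [0, 1], 2 - u on [1, 2].
print(round(marginal_y1(0.5), 3))  # ~0.5
print(round(marginal_y1(1.5), 3))  # ~0.5
print(round(marginal_y1(1.0), 3))  # ~1.0
```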


EDIT As you requested, here is a little more detail. If you use the transformations \begin{eqnarray} Y_1 &=& g_1(X_1,X_2)\\ Y_2 &=& g_2(X_1,X_2) \end{eqnarray} you first need to invert them to get \begin{eqnarray} X_1 &=& h_1(Y_1,Y_2)\\ X_2 &=& h_2(Y_1,Y_2). \end{eqnarray} Only then can you:

  1. Calculate, from $h_1(u,v)$ and $h_2(u,v)$, $$J = \left|\begin{array}{cc} \frac{\partial h_1}{\partial u} & \frac{\partial h_1}{\partial v} \\ \frac{\partial h_2}{\partial u} & \frac{\partial h_2}{\partial v}\end{array} \right|$$
  2. Determine the joint distribution of $(Y_1,Y_2)$, which, restricted to the correct domain, is given by $$f_{Y_1,Y_2}(u,v) = |J|\cdot f_{X_1,X_2}(h_1(u,v),h_2(u,v)).$$

In my example \begin{eqnarray} X_1 &=& Y_1-Y_2\\ X_2 &=& Y_2 \end{eqnarray} are the inverse transformations, so that \begin{eqnarray} h_1(u,v) &=& u-v\\ h_2(u,v) &=& v. \end{eqnarray}
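Plugging these into the determinant above makes the step explicit:

$$ J = \left|\begin{array}{cc} \frac{\partial h_1}{\partial u} & \frac{\partial h_1}{\partial v} \\ \frac{\partial h_2}{\partial u} & \frac{\partial h_2}{\partial v} \end{array}\right| = \left|\begin{array}{cc} 1 & -1 \\ 0 & 1 \end{array}\right| = 1, $$

so $|J| = 1$ and the Jacobian factor drops out of $f_{Y_1,Y_2}(u,v) = |J|\cdot f_{X_1}(u-v)\, f_{X_2}(v)$.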

ANSWER

Yes, you can of course use that approach to get the joint density function, but I don't think your way would be easier to calculate in general.

Regarding your question about $g(y_1, y_2)$ not being a valid density function: you actually made a mistake here. With $J$ taken as the Jacobian of $(y_1, y_2)$ with respect to $(x_1, x_2)$ (the forward transformation, as in the example below), $g(y_1, y_2)/f(x_1, x_2) = 1/|J|$ rather than $|J|$.

Just take the example of a single-variable density function ($C$ is the cumulative distribution function): $$ p(x) = \frac{dC}{dx}, \qquad p(y) = \frac{dC}{dy} = \frac{dC}{dx} \cdot \frac{dx}{dy} = \frac{p(x)}{y'(x)}. $$
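This single-variable relation can be checked by simulation. A hypothetical Python example (not from either answer): take $X \sim \mathrm{Uniform}(0,1)$ and $y(x) = x^2$, so $p(y) = p(x)/y'(x) = 1/(2\sqrt{y})$, whose CDF is $C(y) = \sqrt{y}$, giving $C(0.25) = 0.5$:

```python
import random

# Hypothetical example: X ~ Uniform(0, 1), Y = X**2, so y'(x) = 2x and
# p(y) = p(x) / y'(x) = 1 / (2 * sqrt(y)), with CDF C(y) = sqrt(y).
# Check C(0.25) = sqrt(0.25) = 0.5 by Monte Carlo.
random.seed(1)
N = 200_000
hits = sum(1 for _ in range(N) if random.random() ** 2 <= 0.25)
print(hits / N)  # ~0.5
```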