Calculating joint probability density


Say I have $n$ independent random variables $X_1,...,X_n: \Omega \rightarrow \mathbb{R}$ on a probability space $\Omega$, all with the same density $p:\mathbb{R}\rightarrow [0,\infty)$. Could someone explain to me in measure-theoretic terms (i.e. assuming I know plenty of measure theory but not much probability-theory terminology) how to explicitly write down the joint density $\mathbb{R}^2\rightarrow [0,\infty)$ of $X_1$ and $t= \sum_{j=1}^nX_j$ using only $p$? For instance, I have an example where $p(x) = xe^{-x}$, and I'm supposed to get $$\frac{1}{\Gamma ( 2n-2)}\,x_1(t-x_1)^{2n-3}e^{-t}$$ for the joint density.
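(The target formula can be sanity-checked numerically: since $p$ here is the Gamma(2,1) density, the sum $\sum_j X_j$ should have the Gamma(2n,1) density, and integrating the claimed joint density over $x_1 \in [0,t]$ should recover exactly that. A minimal Python check, with the choice $n=4$ arbitrary:)

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

n = 4  # illustrative choice of the number of summands

def joint(x1, t):
    # claimed joint density of (X_1, sum_j X_j) for p(x) = x e^{-x}
    return x1 * (t - x1) ** (2 * n - 3) * np.exp(-t) / Gamma(2 * n - 2)

for t in [1.0, 3.0, 7.0]:
    marginal, _ = quad(joint, 0.0, t, args=(t,))
    gamma_2n_pdf = t ** (2 * n - 1) * np.exp(-t) / Gamma(2 * n)  # Gamma(2n,1) pdf
    print(t, marginal, gamma_2n_pdf)
```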

What if you just push the product measure forward from $\mathbb{R}^n$ to $\mathbb{R}^n$ via $$L: (x_1,...,x_{n-1}, x_n)\mapsto (x_1,...,x_{n-1},\sum_{i=1}^n x_i)$$ and then project down $ \mathbb{R}^n\rightarrow \mathbb{R}^2$ via

$$\pi :(x_1,...,x_{n-1}, t)\mapsto (x_1,t).$$

Then the density function is easy to track: since $L$ is an invertible linear map with determinant $1$, the joint density $p(x_1)\cdots p(x_n)$ pushes forward to its composition with $L^{-1}$ (no Jacobian factor appears), and since $\pi$ is just a projection, the density is pushed down by integrating over the fibers $\pi^{-1}(x_1,t)$.
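(This pushforward picture can be checked by simulation: sample from the product measure, apply $\pi \circ L$, and compare an empirical estimate of the joint density to the closed form. A Monte Carlo sketch, with $n=3$ and the evaluation point $(x_1,t)=(1,4)$ chosen arbitrarily:)

```python
import numpy as np
from scipy.special import gamma as Gamma

rng = np.random.default_rng(0)
n, N = 3, 1_000_000

# p(x) = x e^{-x} is the Gamma(2, 1) density, so the X_i can be sampled directly.
X = rng.gamma(shape=2.0, scale=1.0, size=(N, n))

# pi composed with L keeps (x_1, t) where t is the full sum.
x1, t = X[:, 0], X.sum(axis=1)

# Empirical joint density near (x1, t) = (1, 4), estimated from a small box.
a, b, h = 1.0, 4.0, 0.1
emp = np.mean((np.abs(x1 - a) < h) & (np.abs(t - b) < h)) / (2 * h) ** 2

closed = a * (b - a) ** (2 * n - 3) * np.exp(-b) / Gamma(2 * n - 2)
print(emp, closed)  # should agree to within a few percent
```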

You will need the following identity,

$$\frac{C^{2k+1}}{(2k+1)!} = \int_{A_k(C)}\big(C-\sum_{i=1}^k x_i\big)\cdot x_1x_2...x_k~~ dx_1dx_2...dx_k,$$

where $$A_k(C):= {\{(x_1,...,x_k)\in \mathbb{R}^k:~~~~\sum_{i=1}^k x_i\leq C,~~\text{and}~~ x_i\geq 0~~\text{for all} ~i~\}}.$$

This can be proved by induction: $$\int_{A_k(C)}\big(C-\sum_{i=1}^k x_i\big)\cdot x_1x_2...x_k~~ dx_1dx_2...dx_k =$$ $$\int_0^C ~x_k \bigg( ~\int_{A_{k-1}(C-x_k)}\big((C-x_k)- \sum_{i=1}^{k-1} x_i\big)\cdot x_1x_2...x_{k-1}~~ dx_1dx_2...\bigg) ~dx_k=$$ $$\int_0^C ~x_k\frac{(C-x_k)^{2k-1}}{(2k-1)!}dx_k.$$

Then just integrate by parts (or use the Beta integral) to get $$\int_0^C x_k\frac{(C-x_k)^{2k-1}}{(2k-1)!}\,dx_k = \frac{C^{2k+1}}{(2k+1)!},$$ which completes the induction.
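(The identity is easy to verify numerically for a small $k$; here is the $k=2$ case, with $C=2$ an arbitrary choice, computed by iterated integration over the simplex $A_2(C)$:)

```python
from scipy.integrate import dblquad

C = 2.0
# k = 2 instance: integral over A_2(C) of (C - x1 - x2) * x1 * x2.
# dblquad integrates func(inner, outer); here x1 is inner, x2 is outer,
# with x1 ranging over [0, C - x2] so that x1 + x2 <= C.
val, _ = dblquad(lambda x1, x2: (C - x1 - x2) * x1 * x2,
                 0.0, C,                  # x2 range
                 0.0, lambda x2: C - x2)  # x1 range, depending on x2
print(val, C ** 5 / 120)  # (2k+1)! = 5! = 120
```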