Sum of $r$ independent gamma random variables - p.d.f. technique.


Let $X_1, X_2, \ldots, X_r$ be $r$ independent gamma variables with parameters $\alpha = \alpha_i $ and $\beta = 1$, $i = 1, 2, \ldots, r$, respectively. Show that $Y_1 = X_1 + X_2 + \cdots + X_r$ has a gamma distribution with parameters $\alpha = \alpha_1 + \alpha_2 + \cdots + \alpha_r$ and $\beta = 1$.

My approach for this problem is to use the p.d.f. technique, which requires me to define the following variables:

\begin{align*} Y_2 &= X_2 + \cdots + X_r \\ Y_3 &= X_3 + \cdots + X_r \\ &\vdots\\ Y_r &= X_r \end{align*}

If we let $\mathscr{A} = \{(x_1, x_2, \ldots, x_r): 0 < x_i < \infty, i = 1, 2, \ldots, r\}$ be the space of $X_1, X_2, \ldots, X_r$, we will have a space for $Y_1, Y_2, \ldots, Y_r$, call it $\mathscr{B}$, which will be defined by the above equations.

My issue is correctly defining the set $\mathscr{B}$ so that the transformation from $\mathscr{A}$ onto $\mathscr{B}$ is one-to-one. I was thinking $\mathscr{B} = \{(y_1, y_2, \ldots, y_r): 0 < y_i < \infty, i = 1, 2, \ldots, r \}$, but that isn't specific enough to carry out the p.d.f. technique.
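(One way to make $\mathscr{B}$ explicit, sketched directly from the defining equations above, is to invert the transformation: $$x_i = y_i - y_{i+1} \quad (i = 1, \ldots, r-1), \qquad x_r = y_r,$$ so the conditions $0 < x_i < \infty$ translate into $$\mathscr{B} = \{(y_1, y_2, \ldots, y_r) : 0 < y_r < y_{r-1} < \cdots < y_1 < \infty\}.$$ Note that the inverse map is linear with a triangular matrix having unit diagonal, so its Jacobian is $1$.)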

Any suggestions on how to approach this would be greatly appreciated.

Best answer:

You can assume $r = 2$ since the general case can be deduced from this one by induction. The density of $X_1 + X_2$ is $f_{X_1} * f_{X_2}$, which is an integral that can be computed (I don't know how difficult it is though): $$f_{X_1} * f_{X_2}(x) = \int_{0}^{x}f_{X_1}(y)f_{X_2}(x - y)\,dy.$$
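For $\beta = 1$, as in the question, the integral is not hard: substituting the gamma densities and then putting $y = xu$ reduces it to a beta integral. $$f_{X_1} * f_{X_2}(x) = \int_0^x \frac{y^{\alpha_1 - 1}e^{-y}}{\Gamma(\alpha_1)}\cdot\frac{(x - y)^{\alpha_2 - 1}e^{-(x-y)}}{\Gamma(\alpha_2)}\,dy = \frac{e^{-x}}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\int_0^x y^{\alpha_1 - 1}(x - y)^{\alpha_2 - 1}\,dy.$$ The substitution $y = xu$ turns the last integral into $$x^{\alpha_1 + \alpha_2 - 1}\int_0^1 u^{\alpha_1 - 1}(1 - u)^{\alpha_2 - 1}\,du = x^{\alpha_1 + \alpha_2 - 1}B(\alpha_1, \alpha_2),$$ and since $B(\alpha_1, \alpha_2) = \Gamma(\alpha_1)\Gamma(\alpha_2)/\Gamma(\alpha_1 + \alpha_2)$, $$f_{X_1} * f_{X_2}(x) = \frac{x^{\alpha_1 + \alpha_2 - 1}e^{-x}}{\Gamma(\alpha_1 + \alpha_2)}, \qquad x > 0,$$ which is the Gamma($\alpha_1 + \alpha_2, 1$) density.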

A more fun way is to first compute the characteristic function of a general Gamma($\alpha, \beta$) random variable $X$ (assume $\alpha, \beta > 0$). That is, for $t\in\mathbb{R}$, compute $$\phi_{\alpha, \beta}(t) = E(e^{itX}) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\int_{0}^{\infty} x^{\alpha - 1}e^{-\beta x}e^{itx}\,dx = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\int_{0}^{\infty} x^{\alpha - 1}e^{-(\beta - it)x}\,dx.$$

The answer is, as you might formally guess, $$\phi_{\alpha, \beta}(t) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\frac{\Gamma(\alpha)}{(\beta - it)^{\alpha}}=\left(\frac{\beta}{\beta - it}\right)^{\alpha}.$$

This "formal guess" really asserts that the identity $$\int_{0}^{\infty} x^{\alpha - 1}e^{-\lambda x}\,dx = \frac{\Gamma(\alpha)}{\lambda^{\alpha}},$$ which we already know holds for $\lambda > 0$, continues to hold for $\lambda \in \mathbb{C}$ with $\text{Re}(\lambda) > 0$. This can be proven using the identity theorem: the left-hand and right-hand sides both define holomorphic functions of $\lambda$ on the right half-plane $\{\lambda \in \mathbb{C} : \text{Re}(\lambda) > 0\}$, and they agree on the set $(0, \infty)$, which has a limit point in that half-plane.
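As a quick numerical sanity check of the closed form (a sketch only; the values $\alpha = 3$, $\beta = 2$, $t = 1.3$ are arbitrary illustrative choices, not from the problem), one can compare a Monte Carlo estimate of $E(e^{itX})$ with $(\beta/(\beta - it))^{\alpha}$:

```python
import cmath
import random

# Monte Carlo check of phi_{alpha,beta}(t) = (beta / (beta - i t))^alpha.
# alpha, beta, t below are arbitrary illustrative choices.
random.seed(0)
alpha, beta, t = 3.0, 2.0, 1.3

# random.gammavariate takes (shape, scale), so scale = 1/beta for rate beta.
samples = [random.gammavariate(alpha, 1.0 / beta) for _ in range(200_000)]

mc = sum(cmath.exp(1j * t * x) for x in samples) / len(samples)
closed = (beta / (beta - 1j * t)) ** alpha

print(abs(mc - closed))  # Monte Carlo error; small for this sample size
```

The agreement is to within the usual $O(n^{-1/2})$ Monte Carlo error.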

After you have this, you can observe that for all $t \in \mathbb{R}$, $$E(e^{it(X_1 + X_2)}) = E(e^{itX_1})E(e^{itX_2}) = \phi_{\alpha_1, \beta}(t)\phi_{\alpha_2, \beta}(t) = \phi_{\alpha_1 + \alpha_2, \beta}(t).$$ Then, by the Fourier inversion theorem, equality of characteristic functions implies equality of distributions, so $X_1 + X_2$ has the Gamma($\alpha_1 + \alpha_2, \beta$) distribution.
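To double-check the conclusion numerically (again a sketch; $\alpha_1 = 2$, $\alpha_2 = 3.5$ are illustrative values, with $\beta = 1$ as in the problem), one can simulate $X_1 + X_2$ and compare its empirical CDF with the Gamma($\alpha_1 + \alpha_2, \beta$) CDF:

```python
import math
import random

# Empirical check that X1 + X2 ~ Gamma(alpha1 + alpha2, beta).
# alpha1, alpha2 are illustrative values; beta = 1 as in the problem.
random.seed(1)
alpha1, alpha2, beta = 2.0, 3.5, 1.0

n = 100_000
# random.gammavariate takes (shape, scale), with scale = 1/beta.
sums = [random.gammavariate(alpha1, 1.0 / beta) + random.gammavariate(alpha2, 1.0 / beta)
        for _ in range(n)]

def gamma_cdf(x, alpha, beta=1.0, steps=4000):
    """Gamma(alpha, beta) CDF via midpoint-rule integration of the density."""
    h = x / steps
    dens = lambda y: beta**alpha / math.gamma(alpha) * y**(alpha - 1) * math.exp(-beta * y)
    return h * sum(dens((k + 0.5) * h) for k in range(steps))

for x in (2.0, 5.0, 9.0):
    emp = sum(s <= x for s in sums) / n
    thy = gamma_cdf(x, alpha1 + alpha2, beta)
    print(f"x={x}: empirical CDF {emp:.4f} vs Gamma CDF {thy:.4f}")
```

The two CDFs should agree to within sampling error at every checkpoint.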