Identifying the given PDF?


I came across the following expression for the PDF of $n$ exponential random variables.

$$f_X(x) = \sum_{n=1}^N \binom{N}{n} \frac{n(-1)^{n-1}}{\Omega}\exp\left(\frac{-nx}{\Omega}\right)$$

My question is: how does this PDF represent $n$ exponential random variables?

Best Answer

The given function is the PDF for the maximum of $n$ independent and identically distributed exponential random variables $X_i \sim \text{Exp}(\lambda).$ To begin, let's show that

$$\mathsf{Pr}(\max X_i \leq x) = \sum_{k = 0}^n {n \choose k} (-1)^k e^{-\lambda k x}$$

We begin by noting that the event $\max X_i \leq x$ is equivalent to $\bigcap_{k = 1}^n [X_k \leq x].$ From here we can proceed either by using the independence of the variables together with the binomial theorem:

$$\begin{align}\mathsf{Pr}\left(\bigcap_{k = 1}^n [X_k \leq x]\right) & = \mathsf{Pr}(X_k \leq x)^n \\ & = \left(\int_0^x \lambda e^{-\lambda t} dt\right)^n \\ & = \left(- e^{-\lambda x} + 1\right)^n \\ & = \sum_{k = 0}^n{n \choose k}\left(-e^{-\lambda x}\right)^k1^{n - k} \\ & = \sum_{k = 0}^n {n \choose k} (-1)^k e^{-\lambda k x}\end{align}$$

or, if you prefer, we can use the law of complements and the inclusion-exclusion principle, noting that all intersections of $k$ events $[X_i > x]$ have the same probability because the $X_i$ are identically distributed.

$$\begin{align}\mathsf{Pr}\left(\bigcap_{k = 1}^n [X_k \leq x]\right) & = 1 - \mathsf{Pr}\left(\bigcup_{k = 1}^n [X_k > x]\right)\\ & = 1 - \sum_{k=1}^n {n \choose k}(-1)^{k-1} \mathsf{Pr}\left(\bigcap_{j = 1}^k [X_j > x]\right) \\ & = 1 + \sum_{k=1}^n {n \choose k}(-1)^{k} \left(e^{-\lambda x}\right)^k \\ & = \sum_{k = 0}^n {n \choose k} (-1)^k e^{-\lambda k x}\end{align}$$
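As a quick numerical sanity check (a sketch using only the Python standard library; the helper names `cdf_product` and `cdf_series` are just illustrative), we can confirm that the product form and the binomial-expansion form of the CDF agree:

```python
from math import comb, exp

def cdf_product(lam, n, x):
    # (1 - e^{-lam x})^n : CDF of the max via independence
    return (1 - exp(-lam * x)) ** n

def cdf_series(lam, n, x):
    # binomial expansion: sum_{k=0}^n C(n,k) (-1)^k e^{-lam k x}
    return sum(comb(n, k) * (-1) ** k * exp(-lam * k * x)
               for k in range(n + 1))

# the two closed forms agree for arbitrary parameters
for lam in (0.5, 1.0, 2.0):
    for n in (1, 3, 7):
        for x in (0.1, 1.0, 5.0):
            assert abs(cdf_product(lam, n, x) - cdf_series(lam, n, x)) < 1e-12
```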

Either way, differentiating both sides shows that we recover the series you mentioned as the PDF of the maximum:

$$\begin{align}p_{\max}(x) = \frac{d}{dx} \mathsf{Pr}(\max X_i \leq x) & = \frac{d}{dx}\sum_{k = 0}^n {n \choose k} (-1)^k e^{-\lambda k x} \\ & = \sum_{k = 0}^n {n \choose k} (-1)^k \frac{d}{dx}e^{-\lambda k x} \\ & = \sum_{k = 0}^n {n \choose k} (-1)^k \left(-\lambda k e^{-\lambda k x}\right) \\ & = \sum_{k = 0}^n {n \choose k} (-1)^{k+1} \lambda k e^{-\lambda k x}\end{align}$$

noting that we can pass the derivative through the sum without restriction, since the sum is finite. This series is equivalent to your series if we let $\lambda = \frac1{\Omega}$ (accounting for the slight difference in convention); note also that the $k = 0$ term vanishes, so the sum effectively starts at $k = 1,$ matching your index.
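We can also spot-check this numerically (a sketch with illustrative helper names, standard library only): the alternating series should match the product-rule form of the maximum's density, $n\lambda e^{-\lambda x}(1 - e^{-\lambda x})^{n-1}.$

```python
from math import comb, exp

def pdf_series(lam, n, x):
    # sum_{k=0}^n C(n,k) (-1)^{k+1} lam k e^{-lam k x}  (the k = 0 term is zero)
    return sum(comb(n, k) * (-1) ** (k + 1) * lam * k * exp(-lam * k * x)
               for k in range(n + 1))

def pdf_max(lam, n, x):
    # d/dx (1 - e^{-lam x})^n = n lam e^{-lam x} (1 - e^{-lam x})^{n-1}
    return n * lam * exp(-lam * x) * (1 - exp(-lam * x)) ** (n - 1)

for n in (1, 2, 5):
    for x in (0.2, 1.0, 3.0):
        assert abs(pdf_series(1.5, n, x) - pdf_max(1.5, n, x)) < 1e-12
```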


For comparison, I have also derived the PDF of the sum of $n$ i.i.d. exponential variables, to show that it is different. It helps to start from the CDF and then differentiate to get the PDF. If the $X_i$ are $n$ independent variables with $X_i \sim \text{Exp}(\lambda),$ then

$$\begin{align}\mathsf{Pr}\left(\sum_{k = 1}^n X_k \leq x\right) & = \int_{\{(t_1,t_2,\ldots, t_n) \,:\, t_k \geq 0,\ \sum t_k \leq x\}} \prod_{k = 1}^n \lambda e^{-\lambda t_k} \, dt^n\\ & = \int_0^x \int_0^{x - t_1}\int_0^{x - t_1 - t_2} \cdots \int_0^{x - \sum_{k = 1}^{n-1} t_k} \lambda^n e^{-\lambda \sum t_k} \, dt_n \, dt_{n-1} \cdots dt_1\end{align}$$

Now, we can consider taking derivatives on both sides with respect to $x.$ Doing this on the right is a bit tricky, but we can apply the Leibniz integral rule.

Notice that for every integral except the last, because of how the bounds are set up, if we evaluate the next integral at the upper bound, we will get zero. For example, letting $t_1 = x,$ the upper bound for the next integral would be $x - t_1 = t_1 - t_1 = 0.$ Letting $t_2 = x - t_1$ will cause $x - t_1 - t_2 = x - t_1 - (x - t_1) = 0,$ and so on. Because of this, by repeatedly using the Leibniz rule we can pass the derivative all the way to the innermost integral:

$$\begin{align}p_{\text{sum}}(x) = \frac{d}{dx} \mathsf{Pr}\left(\sum_{k = 1}^n X_k \leq x\right)& = \frac{d}{dx} \int_0^x \int_0^{x - t_1}\int_0^{x - t_1 - t_2} \cdots \int_0^{x - \sum_{k = 1}^{n-1} t_k} \lambda^n e^{-\lambda \sum t_k} \, dt_n \cdots dt_1 \\ & = \left[ \int_0^{x - t_1} \cdots \right]_{t_1 = x} + \int_0^x \frac{d}{dx} \int_0^{x - t_1}\int_0^{x - t_1 - t_2} \cdots \int_0^{x - \sum_{k = 1}^{n-1} t_k} \lambda^n e^{-\lambda \sum t_k} \, dt_n \cdots dt_1 \\ & = \int_0^x \left[\int_0^{x - t_1 - t_2} \cdots\right]_{t_2 = x - t_1} dt_1 + \int_0^x \int_0^{x - t_1}\frac{d}{dx}\int_0^{x - t_1 - t_2} \cdots \int_0^{x - \sum_{k = 1}^{n-1} t_k} \lambda^n e^{-\lambda \sum t_k} \, dt_n \cdots dt_1 \\ & \;\; \vdots \\ & = \int_0^x \int_0^{x - t_1}\int_0^{x - t_1 - t_2} \cdots \frac{d}{dx} \int_0^{x - \sum_{k = 1}^{n-1} t_k} \lambda^n e^{-\lambda \sum t_k} \, dt_n \cdots dt_1 \\ & = \int_0^x \int_0^{x - t_1}\cdots\int_0^{x - \sum_{k = 1}^{n-2} t_k} \lambda^n e^{-\lambda x} \, dt_{n-1} \, dt_{n-2} \cdots dt_1 \\ & = \lambda^n e^{-\lambda x}\int_0^x \int_0^{x - t_1}\cdots\int_0^{x - \sum_{k = 1}^{n-2} t_k} dt_{n-1} \, dt_{n-2} \cdots dt_1\end{align}$$

where each bracketed boundary term vanishes by the observation above.

Now, by repeating the same trick and noting that the innermost integrand no longer depends on $x,$ we can show that if $f(x)$ denotes the remaining iterated integral, then $f^{(k)}(0) = 0$ for $k < n - 1$ and $f^{(n-1)}(x) = 1$ identically, so $f(x) = \frac{x^{n-1}}{(n-1)!},$ giving us that

$$p_{\text{sum}}(x) = \frac{\lambda^n}{(n-1)!} x^{n-1} e^{-\lambda x}$$

which you may recognize as a $\text{Gamma}(n, \lambda)$ distribution (for integer $n$ also called the Erlang distribution).
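The claim $f(x) = \frac{x^{n-1}}{(n-1)!}$ can be spot-checked numerically. A sketch for the $n = 3$ case, where the remaining double integral should equal $\frac{x^2}{2}$ (the helper name `iterated_volume` is just illustrative):

```python
def iterated_volume(x, steps=2000):
    # Riemann (midpoint) sum for f(x) = int_0^x int_0^{x - t1} dt2 dt1,
    # i.e. the remaining iterated integral in the n = 3 case
    h = x / steps
    total = 0.0
    for i in range(steps):
        t1 = (i + 0.5) * h       # midpoint rule in t1
        total += (x - t1) * h    # inner integral int_0^{x - t1} dt2 = x - t1
    return total

x = 2.0
# should match x^{n-1} / (n-1)! = x^2 / 2 for n = 3
assert abs(iterated_volume(x) - x ** 2 / 2) < 1e-6
```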


(Edit: just as a note, both of these calculations assume $x > 0.$ If $x \leq 0,$ then both PDFs are $0$ because all of the relevant probabilities for the exponential distribution are $0.$)
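As a final check (a sketch under the $n = 2$ case, standard library only; `p_sum` and `convolution_n2` are illustrative names), the Erlang density should match a direct numerical convolution of two $\text{Exp}(\lambda)$ densities:

```python
from math import exp, factorial

def p_sum(lam, n, x):
    # Erlang / gamma density: lam^n x^{n-1} e^{-lam x} / (n-1)!
    return lam ** n * x ** (n - 1) * exp(-lam * x) / factorial(n - 1)

def convolution_n2(lam, x, steps=4000):
    # midpoint Riemann sum for the convolution of two Exp(lam) densities:
    # int_0^x lam e^{-lam t} * lam e^{-lam (x - t)} dt = lam^2 x e^{-lam x}
    h = x / steps
    return sum(lam * exp(-lam * t) * lam * exp(-lam * (x - t)) * h
               for t in ((i + 0.5) * h for i in range(steps)))

lam, x = 1.3, 2.0
assert abs(convolution_n2(lam, x) - p_sum(lam, 2, x)) < 1e-6
```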