Problem:
I have a sum of $N$ random variables $X_i$, where $N$ is distributed according to a binomial distribution and the $X_i$ are independent and identically distributed according to an exponential distribution. I would like to get the pdf of the sum
$$ Y_N = X_1 + X_2 + \dots + X_N. $$
If I write the pmf of $N$ as $$ p_n = {N \choose n} p^n q^{N-n}$$ and the pdf of the $X_i$ as $$ f(x) = \lambda e^{-\lambda x},$$ how can I get the pdf $p(y)$ of $Y_N$?
Attempt:
I gather this is called a compound process. I read about it somewhat in Feller: the section is "Sum of a Random Number of Random Variables", but he does not mix discrete and continuous random variables. I think this is just a discrete time random walk where the steps are exponentially distributed.
I think I can condition on $N$, something like $$p(y) = \sum_{n=0}^N \text{Prob}(N=n)\, f_{X_1 + \dots + X_n}(y)$$ (written with densities, since the sum is continuous). The $n=0$ term is a point mass $q^N$ at $y=0$, so for $y>0$ this becomes
$$p(y) = \sum_{n=1}^N p_n f^{n*}(y),$$
but I'm not actually sure how to write the $n$-fold convolution of exponentials (although this should be a Gamma distribution, I think).
So, first interjection: is it this?
$$ f^{n*}(y) = \int dx_1\dots dx_n f(x_1)f(x_2-x_1)f(x_3-x_2-x_1)\dots f(y-x_n-\dots x_1)?$$
or is it this?
$$ f^{n*}(y) = \int dx_1\dots dx_n f(y-x_n)f(x_n-x_{n-1})\dots f(x_2-x_1)f(x_1)?$$
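Whichever way the convolution is written out, the Gamma guess can at least be sanity-checked numerically. A minimal Monte Carlo sketch (the parameter values $\lambda = 2$, $n = 5$ are arbitrary illustrative choices) comparing the empirical density of $X_1 + \dots + X_n$ against $\lambda^n y^{n-1} e^{-\lambda y}/(n-1)!$:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
lam, n, trials = 2.0, 5, 200_000  # arbitrary illustrative parameters

# Empirical samples of X_1 + ... + X_n with X_i ~ Exp(lam)
y = rng.exponential(scale=1/lam, size=(trials, n)).sum(axis=1)

# Conjectured n-fold convolution: the Gamma(n, lam) density
def gamma_pdf(t):
    return lam**n * t**(n - 1) * np.exp(-lam * t) / factorial(n - 1)

# Compare a normalized histogram to the conjectured density
edges = np.linspace(0.0, 8.0, 41)
hist, _ = np.histogram(y, bins=edges, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
max_err = np.max(np.abs(hist - gamma_pdf(mids)))
print(max_err)  # small if the Gamma guess is right
```

The sample mean should also land near $n/\lambda$, the mean of a Gamma$(n,\lambda)$ variable.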
Then I know I can take a Laplace transform to simplify the convolution (although I'm not sure exactly what the convolution should look like yet), to get something like
$$ p(s) = \sum_{n=0}^N p_n f(s)^n$$
Does this make sense? Then this is the probability generating function of $p_n$ evaluated at $f(s)$:
$$ G(z) = \langle z^n \rangle|_{z=f(s)} = \sum_{n=0}^N z^n p_n |_{z=f(s)}$$
So the mgf for $p(y)$ is the pgf for $p_n$ evaluated at the mgf for $f(x)$: is this correct?
(using this link to keep my words straight)
Following through, the pgf of the binomial distribution $p_n$ is
$$ G(z) = \sum_{i=0}^N {N \choose i} (zp)^i q^{N-i} = (zp + q)^N,$$
(verified correct here) and the mgf of the exponential distribution $f(x)$ is
$$ f(s) = \int_0^\infty dx \lambda e^{(s-\lambda)x} = \frac{\lambda}{\lambda -s} $$
(verified correct here). So the mgf for $p(y)$ will be
$$ p(s) = (p\frac{\lambda}{\lambda-s}+q)^N. $$
So, for example, the mean value of $Y_N$ should be
$$\mu_Y = \frac{d}{ds}(p\frac{\lambda}{\lambda-s}+q)^N|_{s=0} = N (p\frac{\lambda}{\lambda-s}+q)^{N-1} p\frac{\lambda}{(\lambda-s)^2}|_{s=0} = Np\lambda^{-1}. $$ (Fixed an error thanks to Clement)
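Both the mgf formula and this mean can be checked by direct simulation of the compound sum. A sketch, again with arbitrary illustrative parameters (the masking trick works because the $X_i$ are iid):

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, lam, trials = 6, 0.3, 2.0, 300_000
q = 1.0 - p

# Draw the number of steps, then sum that many Exp(lam) variables.
# Since the X_i are iid, draw a full trials x N matrix and mask columns.
n = rng.binomial(N, p, size=trials)
x = rng.exponential(1/lam, size=(trials, N))
y = (x * (np.arange(N) < n[:, None])).sum(axis=1)

s = 0.5  # any s < lam
print(y.mean(), N * p / lam)                           # mean check
print(np.exp(s * y).mean(), (p*lam/(lam - s) + q)**N)  # mgf check
```

The two printed pairs should agree to Monte Carlo accuracy if $p(s) = (p\lambda/(\lambda-s)+q)^N$ is right.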
The probability distribution should be the inverse Fourier transform of the characteristic function $p(s=ik)$:
$$p(y) = \int_{-\infty}^{\infty} \frac{dk}{2\pi}\, e^{-iky}\left(\frac{p\lambda}{\lambda-ik}+q\right)^N.$$
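Short of a closed form, the inversion can be done numerically and cross-checked against simulation. A sketch with illustrative parameters; the $n=0$ term of the binomial contributes a point mass $q^N$ at $y=0$, which is subtracted before inverting so the remaining transform corresponds to the continuous part of $p(y)$:

```python
import numpy as np

lam, p, N = 2.0, 0.3, 6   # illustrative parameter choices
q = 1.0 - p

def phi(k):
    # characteristic function of Y_N
    return (p * lam / (lam - 1j * k) + q) ** N

# Truncated, discretized inverse Fourier transform; the q^N atom
# at y = 0 is subtracted so the integrand decays and is well behaved.
k = np.linspace(-400.0, 400.0, 800_001)
dk = k[1] - k[0]

def density(y):
    integrand = np.exp(-1j * k * y) * (phi(k) - q**N)
    return float(np.real(integrand.sum()) * dk / (2 * np.pi))

# Monte Carlo cross-check: empirical density near y = 1
rng = np.random.default_rng(2)
trials = 300_000
n = rng.binomial(N, p, size=trials)
x = rng.exponential(1/lam, size=(trials, N))
samples = (x * (np.arange(N) < n[:, None])).sum(axis=1)
emp = ((samples > 0.95) & (samples < 1.05)).mean() / 0.1
print(density(1.0), emp)
```

The two numbers should agree up to truncation and sampling error, which is at least evidence that the transform pair is set up consistently.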
Any thoughts on this integral are appreciated!