Moment Generating Function - Negative Binomial - Alternative Formula


I was given the mgf with parameters 'p' and 'k' for the Negative Binomial distribution as:

$$M_X(t)=\left(\frac{p}{1-(1-p)e^t}\right)^k$$

However as far as I understand the mgf should be:

$$M_X(t)=\left(\frac{pe^t}{1-(1-p)e^t}\right)^k$$

So I have two questions:

1) Am I right about my observation, or am I missing something here?

2) If you agree that the first formula is incorrect then would you be able to suggest how to prove that it is wrong?

Many thanks for all contributions in advance.

K

On BEST ANSWER

There are at least two slightly different distributions called "geometric" distributions, and therefore at least two different distributions called "negative binomial" distributions. In fact there are many more variants, but we will consider only the two that lead to these two mgfs.

1st type. Let $Y$ be the number of failures preceding the first success in a sequence of independent Bernoulli trials with probability of success $p$. Then $$ \mathbb P(Y=n)=p(1-p)^n,\ n=0,1,\ldots, \quad M_Y(t)=Ee^{tY} = \sum_{n=0}^\infty e^{tn}p(1-p)^n = \dfrac{p}{1-(1-p)e^t}. $$
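As a quick numerical sanity check (a sketch, not part of the original argument; the parameter values below are arbitrary), one can compare a truncated version of this series with the closed form, valid whenever $(1-p)e^t < 1$:

```python
import math

# 1st-type geometric: truncated series sum_n e^{tn} p (1-p)^n
# versus the closed form p / (1 - (1-p) e^t).
p, t = 0.3, 0.1  # example parameters; need (1 - p) * e^t < 1
series = sum(math.exp(t * n) * p * (1 - p) ** n for n in range(2000))
closed = p / (1 - (1 - p) * math.exp(t))
print(series, closed)  # the two values agree to high precision
```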

If we continue Bernoulli trials after the first success, we will be able to observe the number of failures between the 1st and 2nd successes, then between the 2nd and 3rd, etc. We get a sequence of independent r.v.'s $Y_1, Y_2,\ldots$ distributed as described above. The value $$X=Y_1+\ldots+Y_k$$ is the random number of failures we have seen if we observe Bernoulli trials until $k$ successes have occurred. This is one of the numerous variants of the negative binomial distribution. For this $X$, $$\mathbb P(X=n)=\binom{n+k-1}{k-1}p^k(1-p)^n,\ n=0,1,2,\ldots, \quad M_X(t)=\left(M_{Y_1}(t)\right)^k=\left(\dfrac{p}{1-(1-p)e^t}\right)^k.$$
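The same kind of check works for this negative binomial mgf, summing the pmf-weighted series directly (again a sketch with arbitrary example parameters):

```python
import math

# 1st-type negative binomial: sum_n e^{tn} P(X = n)
# versus the closed form (p / (1 - (1-p) e^t))^k.
p, k, t = 0.4, 3, 0.1
pmf = lambda n: math.comb(n + k - 1, k - 1) * p**k * (1 - p) ** n
series = sum(math.exp(t * n) * pmf(n) for n in range(3000))
closed = (p / (1 - (1 - p) * math.exp(t))) ** k
print(series, closed)  # agree to high precision
```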

2nd type. Let $Z$ be the number of the trial on which the first success occurs. In the same Bernoulli trials, the number of failures before the first success, $Y$, and the trial number of the first success, $Z$, differ by one: $Z=Y+1$. Then $$ \mathbb P(Z=n)=p(1-p)^{n-1},\ n=1,2,\ldots, \quad M_Z(t)=Ee^{tZ} = \sum_{n=1}^\infty e^{tn}p(1-p)^{n-1} = \dfrac{pe^t}{1-(1-p)e^t}. $$

If we take a sequence of independent r.v.'s $Z_1, Z_2,\ldots$ distributed as described above, we can define $$X'=Z_1+\ldots+Z_k=X+k.$$ Here $X'$ is the number of trials needed to obtain the $k$th success. $$\mathbb P(X'=n)=\binom{n-1}{k-1}p^k(1-p)^{n-k},\ n=k,k+1,k+2,\ldots, \quad M_{X'}(t)=\left(M_{Z_1}(t)\right)^k=\left(\dfrac{pe^t}{1-(1-p)e^t}\right)^k.$$
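A similar sketch verifies the 2nd-type mgf; note that the support now starts at $n=k$ (parameter values are again arbitrary):

```python
import math

# 2nd-type negative binomial: sum over n = k, k+1, ... of e^{tn} P(X' = n)
# versus the closed form (p e^t / (1 - (1-p) e^t))^k.
p, k, t = 0.4, 3, 0.1
pmf = lambda n: math.comb(n - 1, k - 1) * p**k * (1 - p) ** (n - k)
series = sum(math.exp(t * n) * pmf(n) for n in range(k, 3000))
closed = (p * math.exp(t) / (1 - (1 - p) * math.exp(t))) ** k
print(series, closed)  # agree to high precision
```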

You can easily connect these two mgfs: given $M_X(t)$, the mgf of $X+a$ equals $$ M_{X+a}(t)=\mathbb Ee^{t(X+a)} = e^{at}\mathbb Ee^{tX}=e^{at}M_X(t). $$ Comparing the two given mgfs, $$ \left(\dfrac{pe^t}{1-(1-p)e^t}\right)^k = e^{kt} \left(\dfrac{p}{1-(1-p)e^t}\right)^k, $$ we conclude that these are the mgfs of random variables that differ by $k$.
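This factor-of-$e^{kt}$ relation between the two closed forms can also be confirmed numerically (a sketch; parameters are arbitrary):

```python
import math

# The two closed forms differ exactly by the factor e^{kt},
# consistent with X' = X + k and M_{X+a}(t) = e^{at} M_X(t).
p, k, t = 0.4, 3, 0.1
m_failures = (p / (1 - (1 - p) * math.exp(t))) ** k                # mgf of X
m_trials = (p * math.exp(t) / (1 - (1 - p) * math.exp(t))) ** k    # mgf of X'
print(m_trials, math.exp(k * t) * m_failures)  # identical up to rounding
```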

Answering your questions: both formulas are correct; each is the mgf of a negative binomial distribution, one for each of the two types above.