I seem to have a bit of confusion about this particular distribution, and I would appreciate it if people could help me get past it.
My questions are as follows.
Let $X$ be a discrete random variable.
$1)$ The probability mass function of the negative binomial distribution is defined to be $P(X=N)=C_{N-1}^{k-1}p^k(1-p)^{N-k}$, where $k$ is the number of successes, $N$ is the number of trials, and $p$ is the probability of success.
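As a quick sanity check of that pmf (a sketch; the values $k=3$ and $p=0.4$ are arbitrary choices, not from the question), one can verify numerically that it sums to $1$ over $N \ge k$:

```python
from math import comb

def neg_binom_pmf(N, k, p):
    """P(X = N): probability that the k-th success occurs on trial N."""
    return comb(N - 1, k - 1) * p**k * (1 - p)**(N - k)

k, p = 3, 0.4
# The support is N = k, k+1, ...; truncating at 500 leaves a negligible tail.
total = sum(neg_binom_pmf(N, k, p) for N in range(k, 500))
print(total)  # very close to 1.0
```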
Unlike the binomial distribution, whose pmf is written $P(X=k)$, we write the pmf of the negative binomial distribution as $P(X=N)$.
This bugs me; is this due to the fact that $k$ is held constant while the number of trials varies?
$2)$ The expected value $E[X]$ of a negative binomial distribution is:
$E[X]=\frac{k}{p}$
I know this can be derived by an algebraic approach, using the fact that the pmf of the negative binomial distribution sums to 1.
However, can this be proved using the linearity of the expected value, assuming each trial is a Bernoulli random variable?
$3)$ What's confusing me the most is that the expected value $E[X]$ turns out to be independent of the number of trials.
Take for example:
Basketball player A scores his 3rd 3-point shot after 10 attempts.
Basketball player B scores his 3rd 3-point shot after 100 attempts.
The expected value $E[X]$ would be the same in both cases according to the formula?
I would be grateful to anyone who could clarify these concepts.
Thanks in advance.
1) That is just a matter of convention. Just don't allow it to confuse you.
2) You can write $X=X_1+\cdots+X_k$ where the $X_i$ are iid with $X_1\sim\mathsf{Geometric}(p)$.
Then linearity of expectation combined with symmetry leads to $\mathsf EX=k\mathsf EX_1=\frac{k}{p}$.
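This decomposition is easy to check by simulation (a sketch; $k=3$ and $p=0.4$ are illustrative values, so $k/p = 7.5$):

```python
import random

def sample_neg_binom(k, p):
    # Run Bernoulli(p) trials until the k-th success; return the trial count.
    trials, successes = 0, 0
    while successes < k:
        trials += 1
        if random.random() < p:
            successes += 1
    return trials

random.seed(0)
k, p = 3, 0.4
n = 100_000
mean = sum(sample_neg_binom(k, p) for _ in range(n)) / n
print(mean)  # close to k / p = 7.5
```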
3) $X$ is the number of trials.
In an expression $\mathsf EX=\dots$ you will never find an $X$ on the RHS (which would make the expectation random).
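To make this concrete: computing $\mathsf EX$ directly from the pmf depends only on $k$ and $p$, not on any observed number of trials (here with a hypothetical shooting percentage $p=0.4$ for the basketball example):

```python
from math import comb

k, p = 3, 0.4
# E[X] = sum over the support of N * P(X = N); the tail beyond N = 2000 is negligible.
ev = sum(N * comb(N - 1, k - 1) * p**k * (1 - p)**(N - k) for N in range(k, 2000))
print(ev)  # equals k / p = 7.5 up to truncation error
```

Player A's 10 trials and player B's 100 trials are two *realizations* of the random variable $X$; the expectation $k/p$ is a property of the distribution, not of any single outcome.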