Toss all $n$ coins; if at least one comes up heads, the experiment ends; if not, toss all $n$ coins again, and so on


The following question is from Ross's *A First Course in Probability*; it is a theoretical exercise in the fourth chapter, on random variables (the exact question number varies with the edition).

Consider $n$ coins, each of which independently comes up heads with probability $p.$ Suppose that $n$ is large and $p$ is small, and let $\lambda = np.$ Suppose that all $n$ coins are tossed; if at least one comes up heads, the experiment ends; if not, we again toss all $n$ coins, and so on. That is, we stop the first time that at least one of the $n$ coins comes up heads. Let $X$ denote the total number of heads that appear. Which of the following reasonings concerning the approximation of $P(\{ X=1\})$ is correct (in all cases, $Y$ is a Poisson random variable with parameter $\lambda$)?

Now, I have seen the solution from Ross, but it seems rather handwavy and unsatisfactory. As I see it, there is a first subexperiment, in which we toss all $n$ coins. This first subexperiment yields one of $2^{n}$ outcomes, each belonging to $\{0, 1 \}^{n}$: a Boolean string of length $n$ in which $0$ denotes a tail and $1$ denotes a head. If the outcome is any string other than the all-zeros string, the experiment is over.

Henceforth, let us suppose the first subexperiment yields the all-zeros string. Then we perform the second subexperiment and append its outcome to the right of the all-zeros string, giving $2^{n}$ possible strings of length $2n$ whose first $n$ characters are $0.$ We move on to a third subexperiment when this concatenated string is the all-zeros string of length $2n,$ and so on.

Now, these previous two paragraphs merely rephrase the question. My issue is: how do we justify the Poisson approximation? If we only had to deal with the first subexperiment, in which we toss all $n$ coins and stop, then the random variable $X$ would be binomial and the approximation would be straightforward. But we may have to perform a second subexperiment, or more, so we do not have a fixed number of trials, which is a parameter we need in order to describe a binomial random variable. As such, $X$ does not seem to be a binomial random variable. Instead, it counts the number of $1$'s in a string of length $kn,$ where $k$ is a natural number. So the event $\{ X=1\}$ contains $n$ outcomes from the first subexperiment, $n$ outcomes from the second subexperiment, and so on. But I do not know how to proceed from here, or otherwise, to fill in the gaps in the solution.
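As a sanity check on this description of the experiment, here is a minimal Monte Carlo sketch (the parameters $n = 100$, $p = 0.02$, so $\lambda = 2$, are illustrative choices, not from the text). It repeats rounds of $n$ tosses until some round contains a head, records the number of heads in that final round, and estimates $P(X = 1)$:

```python
import random

def sample_X(n, p, rng):
    # Repeat rounds of n independent tosses until at least one head
    # appears; return the number of heads in that final round.
    while True:
        heads = sum(rng.random() < p for _ in range(n))
        if heads > 0:
            return heads

rng = random.Random(0)
n, p = 100, 0.02          # illustrative values; lambda = np = 2
trials = 200_000
hits = sum(sample_X(n, p, rng) == 1 for _ in range(trials))
print(hits / trials)      # empirical estimate of P(X = 1)
```

The estimate should agree closely with a binomial $(n, p)$ conditioned on being at least $1$, which is the distribution the answer below the question identifies for $X$.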

Also, I picture this experiment as a node with $2^{n}$ branches, where the leaves correspond to the outcomes of the first subexperiment. The leaf corresponding to the all-zeros string of length $n$ then becomes a node with $2^{n}$ branches of its own, and so on. Does this seem right?

1 Answer

Note that $P(X=k)$ does not depend on how many rounds of tosses occur before the game ends; only the last round matters. However, $X$ is not binomial, since $P(X=0)=0$: the event $\{X=0\}$ means the game never ends, which can be shown to have probability zero. Indeed, conditioning on the final round containing at least one head gives, for $k \geq 1,$
$$P(X=k) = \frac{\binom{n}{k}p^k(1-p)^{n-k}}{1-(1-p)^n}.$$
If we did not have the game-ending behavior, then $X$ would be binomial, so we can expect that if $(1-p)^n$ is very small (that is, if the probability of no heads for a binomial $X$ would be very small), then $P(X=k)\approx \binom{n}{k}p^k(1-p)^{n-k}$.

Therefore, to see whether the Poisson approximation will be good for $X,$ we need to check whether the Poisson approximation is good in the regime where $(1-p)^n$ is very small.
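A quick numerical check of this reasoning (again with the illustrative values $n = 100$, $p = 0.02$, so $\lambda = 2$): compare the exact conditional-binomial value of $P(X=1)$ with the Poisson quantity $P(Y=1 \mid Y > 0) = \lambda e^{-\lambda}/(1 - e^{-\lambda})$, and with the plain $P(Y=1) = \lambda e^{-\lambda}$ for contrast.

```python
import math

n, p = 100, 0.02          # illustrative values; lambda = np = 2
lam = n * p

# Exact P(X=1): binomial(n, p) conditioned on at least one head.
exact = n * p * (1 - p) ** (n - 1) / (1 - (1 - p) ** n)

# Poisson analogue conditioned on being positive: P(Y=1 | Y>0).
poisson_cond = lam * math.exp(-lam) / (1 - math.exp(-lam))

# Unconditioned Poisson value P(Y=1), for contrast.
poisson_plain = lam * math.exp(-lam)

print(exact, poisson_cond, poisson_plain)
```

With these parameters the conditioned Poisson value matches the exact probability to about three decimal places, while the unconditioned $P(Y=1)$ is visibly off, which is consistent with the answer's point that the conditioning on at least one head cannot be dropped.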