Asymptotically unbiased estimator for $1/p$ in a Bernoulli distribution?


Suppose I have a sample of $n$ independent random variables, each Bernoulli distributed with parameter $p$ (you may assume $0 < p < 1$). I was wondering whether there exist (asymptotically) unbiased estimators for $1/p$ and, if several exist, which are to be preferred. In this question, it is explained why no estimator can be unbiased for infinitely many values of $p$. This settles the question of unbiased estimation negatively, so I'm now hoping for asymptotically unbiased estimation (i.e. the bias tends to $0$ as $n$ tends to $+\infty$).

$1/\bar{X}_n$, where $\bar{X}_n$ is the sample mean (so that $n\bar{X}_n$ is binomially distributed), seems to be an obvious choice. There is of course the problem that $\bar{X}_n$ might be zero (with non-zero probability), but since the probability of this tends to zero as $n$ grows, I presume one could take $$ T = \begin{cases}1/\bar{X}_n & \bar{X}_n \neq 0 \\ \omega_n & \bar{X}_n = 0 \end{cases} $$ for some fixed value $\omega_n \geq n$. Even then, I don't know whether this estimator $T$ has any desirable properties when $\omega_n$ is chosen adequately, or even whether $\omega_n$ can be chosen so that $T$ is asymptotically unbiased.
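To get a feel for this estimator, here is a small Monte Carlo sketch; the function name `T` and the particular fallback choice $\omega_n = n$ are mine, not part of the question:

```python
import random

def T(n, p, omega, rng=random):
    """One draw of the estimator T: 1/X-bar_n if X-bar_n > 0, else the fallback omega_n."""
    s = sum(rng.random() < p for _ in range(n))  # S_n ~ Binomial(n, p)
    return n / s if s > 0 else omega  # 1 / X-bar_n = n / S_n

random.seed(0)
n, p, reps = 200, 0.3, 20000
mean_T = sum(T(n, p, omega=n) for _ in range(reps)) / reps
print(f"mean of T over {reps} runs: {mean_T:.4f}  (target 1/p = {1/p:.4f})")
```

The simulated mean sits slightly above $1/p$: by Jensen's inequality $\mathbb E[1/\bar{X}_n] > 1/\mathbb E[\bar{X}_n] = 1/p$ on the event $\bar{X}_n > 0$, an upward bias of order $1/n$ that vanishes as $n \to \infty$.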

Any thoughts you have on this are welcome!


Best answer:

To avoid problems with $\overline{X}_n = 0$, it might be simpler to take $$ T_n = \dfrac{1}{\epsilon_n + \overline{X}_n} = \dfrac{n}{n \epsilon_n + S_n} $$ for some $\epsilon_n > 0$, where $S_n = \sum_{j=1}^n X_j$ has a binomial distribution with parameters $n$ and $p$. In fact I will take $\epsilon_n = 1/n$. Then $$ \eqalign{\mathbb E[T_n] &= \dfrac{(1-(1-p)^{n+1}) n}{(n+1)p}\cr &\to \dfrac{1}{p} \ \text{as $n \to \infty$} \cr} $$ so this estimator is asymptotically unbiased. To reduce the bias further, multiply by $1 + 1/n$: then $\mathbb E[(1+1/n)\,T_n] = \dfrac{1-(1-p)^{n+1}}{p}$, so the bias is $-\dfrac{(1-p)^{n+1}}{p}$, which decays exponentially in $n$.
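As a sanity check on this closed form, one can compute $\mathbb E[T_n] = \mathbb E[n/(1+S_n)]$ by summing directly over the binomial distribution (a sketch; the function names are mine):

```python
from math import comb

def exact_mean_Tn(n, p):
    """E[T_n] for T_n = n/(1 + S_n), S_n ~ Binomial(n, p), by direct summation."""
    return sum(n / (1 + k) * comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1))

def closed_form(n, p):
    """The answer's closed form: n * (1 - (1-p)^(n+1)) / ((n+1) * p)."""
    return n * (1 - (1 - p)**(n + 1)) / ((n + 1) * p)

n, p = 50, 0.3
print(exact_mean_Tn(n, p), closed_form(n, p), 1 / p)
# The (1 + 1/n)-corrected estimator has only an exponentially small bias:
print((1 + 1/n) * exact_mean_Tn(n, p))
```

The two computations agree, and $(1+1/n)\,\mathbb E[T_n]$ differs from $1/p$ only by $(1-p)^{n+1}/p$.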

Another answer:

Let $k$ be the number of successes. An estimator with a nice interpretation and the desired property is $T_n = \frac{n+2}{k+1}$.

If we, in a Bayesian approach, assume that $p$ is sampled from a uniform prior distribution on $(0,1)$, then the posterior expected value of $p$ given $k$ successes is precisely $\frac{1}{T_n} = \frac{k+1}{n+2}$ (note that this is equivalent to computing the sample mean after adding one extra success and one extra failure, i.e. Laplace's rule of succession).
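This posterior mean can be verified exactly: under a uniform prior the posterior density is proportional to $p^k(1-p)^{n-k}$, so its mean is a ratio of two Beta integrals, computable with exact rational arithmetic (a sketch; the helper names are mine):

```python
from fractions import Fraction
from math import factorial

def beta_int(a, b):
    """Exact value of the integral of p^a (1-p)^b over (0,1): a! b! / (a+b+1)!."""
    return Fraction(factorial(a) * factorial(b), factorial(a + b + 1))

def posterior_mean_p(n, k):
    """Posterior mean of p under a uniform prior, given k successes in n trials."""
    return beta_int(k + 1, n - k) / beta_int(k, n - k)

n, k = 10, 3
print(posterior_mean_p(n, k), Fraction(k + 1, n + 2))  # both print 1/3
```

The ratio simplifies to $\frac{(k+1)!\,(n-k)!/(n+2)!}{k!\,(n-k)!/(n+1)!} = \frac{k+1}{n+2}$, matching $1/T_n$.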

The expected value of $T_n$ is $$\mathrm{E}\,T_n = \frac{n+2}{n+1}\cdot\frac{1 - (1-p)^{n+1}}{p} \rightarrow \frac{1}{p} \quad \text{as } n \to \infty,$$ so $T_n$ is asymptotically unbiased.
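This expectation can also be checked by direct summation over $k \sim \mathrm{Binomial}(n, p)$ (a sketch; function names are mine):

```python
from math import comb

def exact_mean(n, p):
    """E[(n+2)/(k+1)] with k ~ Binomial(n, p), by direct summation."""
    return sum((n + 2) / (k + 1) * comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1))

def closed_form(n, p):
    """The answer's closed form: (n+2)/(n+1) * (1 - (1-p)^(n+1)) / p."""
    return (n + 2) / (n + 1) * (1 - (1 - p)**(n + 1)) / p

for n in (10, 50, 200):
    print(n, exact_mean(n, 0.2), closed_form(n, 0.2))  # tends to 1/p = 5
```

Both columns agree for each $n$ and tend to $1/p$ as $n$ grows, confirming the asymptotic unbiasedness.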