Consistent estimator of $p$ for Bernoulli trials


Show that $Y_n:=\frac{1}{n+2}\left(1+\sum\limits_{i=1}^n X_i\right)$ is a consistent estimator of the success probability $p$. Here the $X_i$ denote i.i.d. random variables following a Bernoulli distribution with success probability $p$.
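As a quick sanity check, $Y_n$ is the sample mean with Laplace-style smoothing (add one success and two trials), so for large $n$ it should land close to $p$. A minimal numeric sketch, assuming NumPy is available and using an illustrative value $p=0.3$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3          # true success probability (chosen for illustration)
n = 10_000
x = rng.binomial(1, p, size=n)   # n i.i.d. Bernoulli(p) trials

# Y_n = (1 + sum X_i) / (n + 2), the estimator from the problem
y_n = (1 + x.sum()) / (n + 2)
print(y_n)
```

For $n = 10{,}000$ the printed value is within a couple of standard errors ($\approx 0.005$) of $0.3$.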


My approach:

Let $\epsilon>0$ be arbitrary and choose $\delta$ with $0<\delta<\epsilon$. Since $\{|Y_n-p|>\epsilon\}\subseteq\{|Y_n-p|>\delta\}$, we have $$ P(|Y_n-p|>\epsilon)\leq P(|Y_n-p|>\delta)=P\left(\left|\frac{1+\sum\limits_{i=1}^nX_i-(n+2)p}{n+2}\right|>\delta\right)\leq P\left(\left|\frac{\sum\limits_{i=1}^nX_i}{n}-p\right|>\delta-\frac{1}{n}\right), $$ where the last inequality holds because $$ |Y_n-p|=\left|\frac{1-2p}{n+2}+\frac{n}{n+2}\left(\frac{\sum_{i=1}^nX_i}{n}-p\right)\right|\leq\frac{1}{n}+\left|\frac{\sum_{i=1}^nX_i}{n}-p\right|, $$ using $|1-2p|\leq 1$ and $\frac{n}{n+2}\leq 1$. Now choose $n>\frac{2}{\delta}$, so that $\delta-\frac{1}{n}>\frac{\delta}{2}$. By the Weak Law of Large Numbers, for arbitrary $c>0$ there exists an $n_0$ such that for all $n>\max\left\{n_0,\frac{2}{\delta}\right\}$ we have $$ P(|Y_n-p|>\epsilon)\leq P\left(\left|\frac{\sum\limits_{i=1}^nX_i}{n}-p\right|>\delta-\frac{1}{n}\right)\leq P\left(\left|\frac{\sum\limits_{i=1}^nX_i}{n}-p\right|>\frac{\delta}{2}\right)<c. $$ Hence, $\lim\limits_{n\to\infty}P(|Y_n-p|>\epsilon)=0$.
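The claimed convergence $P(|Y_n-p|>\epsilon)\to 0$ can also be checked empirically with a Monte Carlo sketch (illustrative values $p=0.3$, $\epsilon=0.05$; `trials` is the number of simulated sample paths per $n$, an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
p, eps, trials = 0.3, 0.05, 2000   # illustrative values

def tail_prob(n):
    # Monte Carlo estimate of P(|Y_n - p| > eps) over `trials` repetitions
    x = rng.binomial(1, p, size=(trials, n))
    y = (1 + x.sum(axis=1)) / (n + 2)
    return np.mean(np.abs(y - p) > eps)

probs = [tail_prob(n) for n in (10, 100, 1000)]
print(probs)
```

The estimated tail probabilities shrink as $n$ grows, in line with consistency; by $n=1000$ the estimate is essentially zero.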


My tutor said that I can't show the consistency of the estimator this way, but I don't understand what exactly is wrong with this approach. Could someone give me feedback?