Finding the expected value and variance of the binomial distribution with the help of the Bernoulli distribution


Is it possible to find the variance and/or the expected value of a binomial distribution using what we know about the Bernoulli distribution?

My thinking is this:

Suppose that $$ x_1,\ldots,x_n\sim Ber\left(p\right) $$

Let us create a new variable

$$ Y=\sum_{i=1}^nx_i $$

And let's find its expected value:

$$ E\lbrack Y]=E\left\lbrack\sum_{i=1}^nx_i\right\rbrack=E\left\lbrack n\overline{x}\right\rbrack=nE\left\lbrack\overline{x}\right\rbrack=np $$

And now the variance:

$$ Var\lbrack Y]=Var\left\lbrack\sum_{i=1}^nx_i\right\rbrack=Var\left\lbrack n\overline{x}\right\rbrack=n^2Var\left\lbrack\overline{x}\right\rbrack=n^2p\left(1-p\right) $$

This would suggest the approach is false, since the binomial variance is known to be $np(1-p)$, not $n^2p(1-p)$. However, my teacher says otherwise but does not show a proof. Is something missing, or is my hypothesis wrong?

Thank you in advance!
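As a quick sanity check (not part of the original post), a simulation can arbitrate between the two candidate variances, $np(1-p)$ and $n^2p(1-p)$. The sketch below draws many samples of $Y=\sum_{i=1}^n x_i$ with $x_i\sim\mathrm{Ber}(p)$ and compares the empirical variance against both formulas; the parameter values are arbitrary choices.

```python
import random

random.seed(0)
n, p, trials = 20, 0.3, 200_000  # arbitrary example parameters

# Each trial draws n Bernoulli(p) variables and sums them, giving one sample of Y.
samples = [sum(1 if random.random() < p else 0 for _ in range(n))
           for _ in range(trials)]

mean = sum(samples) / trials
var = sum((y - mean) ** 2 for y in samples) / (trials - 1)

print(f"empirical mean ~ {mean:.3f}, theory np = {n * p:.3f}")
print(f"empirical var  ~ {var:.3f}")
print(f"n*p*(1-p)   = {n * p * (1 - p):.3f}")     # 4.2
print(f"n^2*p*(1-p) = {n**2 * p * (1 - p):.3f}")  # 84.0
```

With these parameters the empirical variance lands near $np(1-p)=4.2$, nowhere near $n^2p(1-p)=84$, which points to an error in the $Var[\overline x]$ step above.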


Accepted answer:

As noted in the comments, you made a mistake concerning the variance of $\overline x$.

It is not helpful here to replace $\sum_{i=1}^nx_i$ by $n\overline x$.

Make use of the fact that $\mathbb Ex_i$ and $\mathsf{Var}(x_i)$ do not depend on $i$.

Then, in this situation,
$$\mathbb E\left(\sum_{i=1}^nx_i\right)=\sum_{i=1}^n\mathbb Ex_i=n\,\mathbb Ex_1=np$$
and, assuming the $x_i$ are independent,
$$\mathsf{Var}\left(\sum_{i=1}^nx_i\right)=\sum_{i=1}^n\mathsf{Var}(x_i)=n\,\mathsf{Var}(x_1)=np(1-p).$$
On the basis of that (not the other way around) we find:
$$\mathsf{Var}(\overline x)=\mathsf{Var}\left(\frac1n\sum_{i=1}^nx_i\right)=\frac1{n^2}\mathsf{Var}\left(\sum_{i=1}^nx_i\right)=\frac{p(1-p)}n.$$
This is exactly where your derivation went wrong: you used $\mathsf{Var}(\overline x)=p(1-p)$ instead of $p(1-p)/n$. With the correct value, $n^2\,\mathsf{Var}(\overline x)=np(1-p)$, matching the known binomial variance.
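To complement the answer (this check is not from the original post), a small simulation can confirm the corrected value $\mathsf{Var}(\overline x)=p(1-p)/n$; the parameter values below are arbitrary choices.

```python
import random
import statistics

random.seed(1)
n, p, trials = 50, 0.4, 100_000  # arbitrary example parameters

# Each trial: draw n Bernoulli(p) values and record their sample mean x-bar.
xbars = [sum(random.random() < p for _ in range(n)) / n for _ in range(trials)]

var_xbar = statistics.variance(xbars)
print(f"empirical Var(x-bar) ~ {var_xbar:.5f}")
print(f"p(1-p)/n            = {p * (1 - p) / n:.5f}")  # 0.00480
```

The empirical variance of $\overline x$ matches $p(1-p)/n$, not $p(1-p)$, which is the factor-of-$n$ discrepancy behind the question's $n^2p(1-p)$ result.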