Consider an IID (independent and identically distributed) sequence of random variables denoting the result of Bernoulli trials $X_1$, $X_2$, $\ldots$, $X_n$, where each trial succeeds with a probability $p$.
Let $X$ be the random variable denoting the number of successes obtained. Alternatively, if we encode the success of a Bernoulli trial as the value $1$ and its failure as the value $0$, we can define $X$ as the sum of the random variables in the sequence: $X = \sum_{i=1}^{n} X_i$.
We know the standard identity relating the variance to expectations: $Var[X] = E[(X-E[X])^2] = E[X^2] - (E[X])^2$.
We are given $E[X] = np$ and $Var[X]= np(1-p)$.
Can someone please show how to derive $Var[X]= np(1-p)$?
My attempt so far: $E[X^2] = np^2$, then $(E[X])^2 = (?)^2$.
It is neater to go for the variance directly (i.e. without first computing $\mathbb{E}X$).
Apply:$$\mathsf{Var}(X)=\mathsf{Cov}(X,X)=\sum_{i=1}^n\sum_{j=1}^n\mathsf{Cov}(X_i,X_j)=\sum_{i=1}^n\mathsf{Cov}(X_i,X_i)=n\mathsf{Cov}(X_1,X_1)=n\mathsf{Var}(X_1)$$where the second equality rests on bilinearity of covariance, the third on independence (the cross terms $\mathsf{Cov}(X_i,X_j)$ with $i\neq j$ vanish), and the fourth on the trials being identically distributed.

It remains to compute $\mathsf{Var}(X_1)$. Since $X_1$ takes only the values $0$ and $1$, we have $X_1^2 = X_1$, so $E[X_1^2] = E[X_1] = p$, and hence $$\mathsf{Var}(X_1) = E[X_1^2] - (E[X_1])^2 = p - p^2 = p(1-p).$$Therefore $\mathsf{Var}(X) = np(1-p)$.
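As a quick numerical sanity check (a sketch only; the values $n = 10$ and $p = 0.3$ are arbitrary example choices), one can compute $E[X]$ and $\mathsf{Var}(X)$ directly from the binomial pmf and compare them with $np$ and $np(1-p)$:

```python
from math import comb

# Example parameters (arbitrary choices for illustration)
n, p = 10, 0.3

# Binomial pmf: P(X = k) = C(n, k) * p^k * (1 - p)^(n - k)
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]

# E[X] and E[X^2] computed directly from the pmf
ex = sum(k * pmf[k] for k in range(n + 1))
ex2 = sum(k * k * pmf[k] for k in range(n + 1))

# Var(X) = E[X^2] - (E[X])^2
var = ex2 - ex**2

print(ex, n * p)                # both should be np = 3.0
print(var, n * p * (1 - p))     # both should be np(1-p) = 2.1
```

Up to floating-point rounding, the pmf-based moments agree with the closed-form expressions $np$ and $np(1-p)$.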