I know there’s a simple explanation for this but it’s got me stumped. If I take the variance of $nY$, where $Y$ is a random variable, I have $$\operatorname{Var}(nY) = \operatorname{Var}(\underbrace{Y + Y +\cdots+ Y}_{\text{$n$ times}}) = \underbrace{\operatorname{Var}(Y) + \operatorname{Var}(Y) + \cdots + \operatorname{Var}(Y)}_{\text{$n$ times}} = n \operatorname{Var}(Y).$$ But variance properties say $\operatorname{Var}(nY) = n^2\operatorname{Var}(Y)$.
For context, I’m using a Bernoulli variable $Y$ with $E(Y) = p$ and $\operatorname{Var}(Y) = p(1-p)$.
Then, taking $X$ to be $n$ trials of $Y$, I’m practicing deriving $E(X) = np$ and $\operatorname{Var}(X) = \operatorname{Var}(nY) = n \operatorname{Var}(Y) = np(1-p).$
According to every source I’ve checked, $n$ should not be squared in this case. How do I reconcile my derivation of the variance here with the general property for the variance of a random variable multiplied by a scalar?
Really, if you don’t need the context, my entire question is contained in the first paragraph: why does it seem I can get two different answers for $\operatorname{Var}(nY)$?
It is not true in general that $\operatorname{Var}(X+Y)=\operatorname{Var}(X)+\operatorname{Var}(Y)$; that identity holds only when $X$ and $Y$ are independent (more precisely, uncorrelated). In your sum $Y+Y+\cdots+Y$ every term is the *same* random variable, so the terms are perfectly correlated and the identity does not apply. Hence you cannot write $\operatorname{Var}(nY)=\operatorname{Var}(Y)+\operatorname{Var}(Y)+\cdots+\operatorname{Var}(Y)$. The correct way to find the variance of $nY$ is to use the definition: $$\operatorname{Var}(nY)=E\big((nY)^{2}\big)-\big(E(nY)\big)^{2}=n^{2}E(Y^{2})-n^{2}\big(E(Y)\big)^{2}=n^{2}\operatorname{Var}(Y).$$ The binomial result you are trying to derive comes from a different random variable: the number of successes is $X = Y_1 + Y_2 + \cdots + Y_n$, a sum of $n$ *independent* copies of $Y$, not $nY$. For independent terms the variances do add, giving $\operatorname{Var}(X)=n\operatorname{Var}(Y)=np(1-p)$.
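A quick simulation makes the distinction concrete (a sketch in Python/NumPy, not part of the original answer; the variable names are my own): scaling a single Bernoulli draw by $n$ inflates the variance by $n^2$, while summing $n$ independent draws inflates it by only $n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 10, 0.3, 200_000

# Var(nY): one Bernoulli draw per trial, multiplied by n.
y = rng.binomial(1, p, size=trials)
var_scaled = np.var(n * y)    # close to n^2 * p * (1 - p) = 21

# Var(Y_1 + ... + Y_n): n independent Bernoulli draws per trial, summed.
sums = rng.binomial(1, p, size=(trials, n)).sum(axis=1)
var_summed = np.var(sums)     # close to n * p * (1 - p) = 2.1

print(var_scaled, var_summed)
```

The first estimate lands near $n^2 p(1-p) = 21$ and the second near $np(1-p) = 2.1$, matching the two formulas above.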