I'm trying to understand part of a solution to a hypothesis testing problem. We have $X_1,\ldots,X_n$, independent observations of a random variable $X$, and two hypotheses:
- $H_0: X \sim N(0,1)$
- $H_1: X \sim N(1,1)$
In the solution there's a statement that:
when $H_0$ is true, $\sum_{i=1}^n X_i \sim N(0,n)$,
and when $H_1$ is true, $\sum_{i=1}^n X_i \sim N(n,n)$.
My question is: is this true, and if so, how and why?
It is true.
Suppose that $X_1,\ldots,X_n$ are independent and identically distributed $N(a,b)$ random variables. Then, by linearity of expectation, $$ \operatorname E[X_1+\ldots+X_n]=\operatorname EX_1+\ldots+\operatorname EX_n=na, $$ and, by independence, $$ \operatorname{Var}[X_1+\ldots+X_n]=\operatorname{Var}X_1+\ldots+\operatorname{Var}X_n=nb. $$ This gives the expected value and the variance of the sum. However, we still need to show that the sum is itself normally distributed. This is done here using several different methods.
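One standard argument, sketched here for completeness, uses moment generating functions. For $X_i\sim N(a,b)$ the MGF is $M_{X_i}(t)=\exp\!\left(at+\tfrac{bt^2}{2}\right)$, so by independence $$ M_{X_1+\ldots+X_n}(t)=\prod_{i=1}^n M_{X_i}(t)=\exp\!\left(nat+\frac{nbt^2}{2}\right), $$ which is exactly the MGF of $N(na,nb)$. Since the MGF determines the distribution, $\sum_{i=1}^n X_i\sim N(na,nb)$. Taking $a=0,b=1$ gives $N(0,n)$ under $H_0$, and taking $a=1,b=1$ gives $N(n,n)$ under $H_1$.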