Trying to understand statistics/hypothesis testing. The example in the book discusses using a Z-test. I am familiar with the Z-score, and on an intuitive level I understand it: a Z-score basically measures how many standard deviations a point in a sample space is from the mean. This makes sense to me.
What I do not understand is why, for a Z-test, we seemingly take the Z-score and divide it by $\sqrt{n}$?
Can anyone explain the difference? It seems like we are moving away from the intuitive explanation of "number of standard deviations from the mean," which is how we measure the probability via the area under the curve.
The key here is that for a sample of $n$ independent variables $X_i$ with mean $E(X_i)=\mu$ and $\operatorname{Var}(X_i)=\sigma^2,$ the sample mean $\bar X$ has mean $\mu$ but variance $\frac{\sigma^2}{n}:$ $$ \operatorname{Var}\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_i \operatorname{Var}(X_i) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}$$ Intuitively, as you have more and more observations, the sample mean gets closer and closer to the true mean. So the standard error (i.e. the standard deviation of the sample mean) is $\frac{\sigma}{\sqrt{n}},$ and the Z-test statistic $$ z = \frac{\bar X - \mu}{\sigma/\sqrt{n}} $$ is still "number of standard deviations from the mean" — it's just that the relevant standard deviation is that of $\bar X,$ not of a single observation.
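A quick simulation can make this concrete (this is my own illustrative sketch, not from the book): draw many samples of size $n$ from a population with known $\sigma$, compute each sample's mean, and compare the spread of those means to $\sigma/\sqrt{n}$.

```python
import random
import statistics

random.seed(0)

# Assumed illustrative parameters: population N(mu, sigma^2), sample size n.
mu, sigma, n, trials = 0.0, 2.0, 25, 20000

# Compute the mean of each of `trials` samples of size n.
sample_means = [
    statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(trials)
]

observed_se = statistics.stdev(sample_means)   # empirical spread of x-bar
theoretical_se = sigma / n ** 0.5              # sigma / sqrt(n) = 2/5 = 0.4

print(observed_se, theoretical_se)
```

The empirical standard deviation of the sample means should come out close to 0.4, matching $\sigma/\sqrt{n}$ rather than $\sigma = 2$ — which is exactly why the Z-test divides by $\sigma/\sqrt{n}$.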