The difference between a Z-Score and a Z-statistic? Why do we divide by $\sqrt{n}$ for the latter?


Trying to understand statistics/hypothesis testing. The example in the book discusses using a Z-test. I am familiar with Z-scores and understand them on an intuitive level: a Z-score basically measures how many standard deviations a point in a sample space lies from the mean. This makes sense to me.

What I do not understand is why, for a Z-test, we seemingly take the Z-score and divide by an extra factor of $\sqrt{n}$.

Can anyone explain the difference? It seems like we are moving away from the intuitive explanation of "number of standard deviations from the mean," which is how we measure probability via the area under the curve.


There are 2 answers below.


The key here is that for a sample of $n$ variables $X_i$ with mean $E(X_i)=\mu$ and $\operatorname{Var}(X_i)=\sigma^2,$ the sample mean $\bar X$ has mean $\mu$ and variance $\frac{\sigma^2}{n}:$ $$ \operatorname{Var}\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_i \operatorname{Var}(X_i) = \frac{n\sigma^2}{n} = \frac{\sigma^2}{n}$$ Intuitively as you have more and more observations, the sample mean gets closer and closer to the true mean. So the standard error (i.e. the standard deviation of the sample mean) is $\frac{\sigma}{\sqrt{n}}.$
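This shrinking of the variance is easy to verify empirically. The sketch below (with an assumed population mean of 5 and standard deviation of 2; both values are arbitrary choices for illustration) draws many samples of size $n$ and compares the observed standard deviation of the sample means against $\sigma/\sqrt{n}$:

```python
import random
import statistics

random.seed(0)
mu = 5.0      # assumed population mean
sigma = 2.0   # assumed population standard deviation
n = 100       # sample size

# Draw many samples of size n and record each sample mean.
means = []
for _ in range(10_000):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(sample) / n)

empirical_se = statistics.stdev(means)   # observed s.d. of the sample mean
theoretical_se = sigma / n ** 0.5        # sigma / sqrt(n) = 0.2
print(empirical_se, theoretical_se)
```

Both numbers come out close to $0.2$, i.e. the sample mean fluctuates far less than an individual observation (whose standard deviation is $2$).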


A z-score is what you get when you subtract its expected value from a random variable and then divide by its standard deviation. Thus if $\operatorname E(X) = \mu$ and $\operatorname{s.d.}(X) = \sigma,$ then the z-score is $\dfrac{X-\mu} \sigma.$

If $X_1,\ldots,X_n$ all have expected value $\mu$ and standard deviation $\sigma$ and the covariance between any two of them is $0,$ then $\overline X = (X_1+\cdots+X_n)/n$ has expected value $\mu$ and standard deviation $\sigma/\sqrt n.$ Therefore its z-score is $\dfrac{\overline X - \mu}{\sigma/\sqrt n}.$
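To see the two formulas side by side, here is a small numerical sketch with made-up values ($\mu = 50$, $\sigma = 10$, $n = 25$, observed $\overline X = 53$; all hypothetical):

```python
import math

mu, sigma, n, xbar = 50.0, 10.0, 25, 53.0  # hypothetical example values

# z-score of a single observation equal to 53:
# how many standard deviations one point lies from the mean.
z_point = (xbar - mu) / sigma                  # 3 / 10 = 0.3

# z-statistic of the sample mean:
# same idea, but the relevant spread is the standard error sigma/sqrt(n).
z_stat = (xbar - mu) / (sigma / math.sqrt(n))  # 3 / 2 = 1.5

print(z_point, z_stat)
```

So nothing about the "number of standard deviations" intuition is lost: the z-statistic still counts standard deviations, but the standard deviation of the quantity being standardized ($\overline X$) is $\sigma/\sqrt{n}$, not $\sigma$.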