Why do we use a $z$-test rather than a $t$-test when estimating an appropriate sample size?


I'm kinda puzzled on one point.

In our stats class, we are taught to use the Student $t$ distribution to find confidence intervals for normally distributed data, since blindly using the normal distribution with the sample mean and variance gives a confidence interval that is too narrow.

However, when we are doing a problem of the form "how big should the sample size be for the standard error to be below some tiny threshold", we are told to use the normal distribution. Why is this so? Is it only because the normal is close enough and you don't need to worry about the degrees of freedom when solving the equation (which is solved numerically anyway), or is there some deeper reason?


BEST ANSWER

A general rule of thumb is:

If the sample size $n$ is large (for example, larger than $30$), then by the Central Limit Theorem, a $z$-test is appropriate. Otherwise, for small $n$ (for example, smaller than $30$), a $t$-test is appropriate.
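To see why the rule of thumb works, it helps to compare the two-sided $95\%$ critical values directly: the $t$ critical value shrinks toward the normal one as the degrees of freedom grow. A minimal sketch using only the standard library (the $t$ values are taken from a standard $t$ table, since the stdlib has no $t$ quantile function):

```python
from statistics import NormalDist

# Two-sided 95% critical value from the standard normal
z = NormalDist().inv_cdf(0.975)  # ~1.960

# Two-sided 95% t critical values, from a standard t table
t_crit = {5: 2.571, 10: 2.228, 30: 2.042, 100: 1.984}

for df, t in t_crit.items():
    # The ratio t/z approaches 1 as df grows
    print(f"df={df:4d}  t={t:.3f}  z={z:.3f}  t/z={t / z:.3f}")
```

By $df = 30$ the $t$ critical value is already within about $4\%$ of $1.96$, which is why $n \approx 30$ is the usual cutoff.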

Now suppose we want to go backwards. Instead of choosing which type of test to use based on the sample size, we want to know how big the sample size should be based on how accurate we want our confidence interval to be (so that the standard error is within a certain threshold). Generally speaking, we want our standard error to be small, so the sample size must be large; large enough that the Central Limit Theorem applies, the Student $t$ distribution is no longer necessary, and we can simply use the normal distribution.
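The backwards calculation above can be sketched concretely. Solving $z \,\sigma/\sqrt{n} \le E$ for $n$ gives $n \ge (z\sigma/E)^2$; the function name and example numbers below are illustrative, assuming a known (or pilot-estimated) $\sigma$:

```python
from math import ceil
from statistics import NormalDist

def sample_size(sigma, margin, confidence=0.95):
    """Smallest n such that z * sigma / sqrt(n) <= margin,
    using the normal critical value z."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return ceil((z * sigma / margin) ** 2)

# e.g. sigma = 10, and we want the 95% margin of error to be at most 2
n = sample_size(10, 2)
print(n)  # 97
```

Note that the resulting $n$ is well past the $t$-versus-$z$ cutoff, which is exactly the answer's point: any $n$ large enough to make the standard error small is also large enough that the normal approximation is self-consistent.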