I'm kinda puzzled on one point.
In our stat class, we are taught to use the Student $t$ distribution to find confidence intervals for normally distributed data, because blindly using the normal distribution with the sample mean and variance gives a confidence interval that is too narrow.
However, when we do a problem along the lines of "how big should the sample size be for the standard error to be blah blah tiny", we are told to use the Normal distribution. Why is this so? Is it only because the normal is close enough and you don't need to worry about the degrees of freedom when solving the equation (which is solved numerically anyway), or is there some deeper reason?
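To illustrate what I mean about the interval being too narrow, here is a quick sketch in Python with scipy (the sample size, seed, and distribution parameters are arbitrary choices of mine) comparing the two intervals on one small sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10                                  # small sample, where the difference matters most
x = rng.normal(loc=5.0, scale=2.0, size=n)

mean = x.mean()
se = x.std(ddof=1) / np.sqrt(n)         # standard error from the sample SD

z = stats.norm.ppf(0.975)               # ~1.96
t = stats.t.ppf(0.975, df=n - 1)        # ~2.26 for df = 9

print(f"normal 95% CI: {mean - z*se:.3f} .. {mean + z*se:.3f}")
print(f"t      95% CI: {mean - t*se:.3f} .. {mean + t*se:.3f}")
# The t interval is noticeably wider, reflecting the extra
# uncertainty from estimating the variance.
```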
A general rule of thumb is:

- if the population variance is known, use the Normal distribution regardless of sample size;
- if the population variance is unknown and the sample is small (commonly $n < 30$), use the Student $t$ distribution with $n - 1$ degrees of freedom;
- if the sample is large, the two give practically the same answer, because the $t$ distribution converges to the Normal as the degrees of freedom grow.
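To see why the distinction fades with sample size, here is a small sketch (the degrees-of-freedom values are arbitrary) printing the 97.5% critical value as the degrees of freedom grow:

```python
from scipy import stats

# t critical values approach the normal one as df increases
for df in (5, 10, 30, 100, 1000):
    print(f"df={df:>4}: t* = {stats.t.ppf(0.975, df=df):.3f}")
print(f"normal:  z* = {stats.norm.ppf(0.975):.3f}")
# df=   5: t* = 2.571
# df=  10: t* = 2.228
# df=  30: t* = 2.042
# df= 100: t* = 1.984
# df=1000: t* = 1.962
# normal:  z* = 1.960
```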
Now suppose we want to go backwards. Instead of choosing which distribution to use based on the sample size, we want to know how big the sample size must be for our confidence interval to reach a desired accuracy (i.e., for the standard error to fall below some threshold). Generally speaking, we want the standard error to be small, so the sample size must be large; large enough that the Central Limit Theorem applies, the Student $t$ distribution is practically indistinguishable from the Normal, and we can simply use the Normal distribution.
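Here is a sketch of that backwards calculation (the confidence level, target half-width, and assumed $\sigma$ are all made-up inputs; I'm reading "threshold" as the 95% CI half-width $E = z\sigma/\sqrt{n}$, which is where the choice of distribution actually enters). Solving with the Normal quantile is a one-liner; the $t$ version needs iteration because its critical value depends on $n$:

```python
import math
from scipy import stats

sigma = 2.0    # assumed (or pilot-estimate) standard deviation
E = 0.5        # desired half-width of the 95% CI

# Normal-based solution: n = (z * sigma / E)^2
z = stats.norm.ppf(0.975)
n = math.ceil((z * sigma / E) ** 2)
print("normal-based n:", n)           # 62

# t-based solution: iterate, since the critical value depends on n
n_t = n
while True:
    t = stats.t.ppf(0.975, df=n_t - 1)
    n_new = math.ceil((t * sigma / E) ** 2)
    if n_new == n_t:
        break
    n_t = n_new
print("t-based n:     ", n_t)         # 64
```

The two answers differ by only a couple of observations, which is exactly why the Normal shortcut is the one taught for sample-size problems.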