I asked myself these two questions the other day; I have a very limited background in statistics, so any help would be appreciated!
Sometimes in the middle of grading, I look at the average of the exams I have graded so far. Of course, it's not the exact average of the whole group, but it gives an idea of how the group has done.
This is the part I tried to quantify. Suppose you grade the exams of a class of $N$ students, the average of the whole group is $\mu$, and its standard deviation is $\sigma$. Suppose also that the grades follow a normal bell curve (to keep things simple). I have two questions (the first I could probably answer by digging through my old books, but the second seems more complex to me):
- What ratio $n/N$ of graded exams to the total number of exams would give a correct estimate of the average, with, say, 5% accuracy and 95% confidence?
- How fast does the intermediate average (the one you can compute after each exam graded) converge towards the actual average of the whole group? In other words, if
$$\mu_n=\frac{1}{n}\sum_{k=1}^n G_k$$ and $$f(n)=\frac{|\mu_n-\mu|}{\mu},$$
how fast would $f(n)$ approach $0$? Exponentially, logarithmically, or like a power of $n$?
I tried some simulations, but not enough to get anything conclusive. (It's a bit late, and I did it very quickly in Excel; I'll try a proper script later.)
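For what it's worth, here is a minimal NumPy sketch of the kind of simulation I have in mind. All the numbers (class size 30, mean 12, standard deviation 3) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class: N exams with normally distributed grades
N, mu, sigma = 30, 12.0, 3.0
grades = rng.normal(mu, sigma, size=N)

# Grade the exams in a random order and track the running average mu_n
order = rng.permutation(N)
running = np.cumsum(grades[order]) / np.arange(1, N + 1)

# Relative error f(n) = |mu_n - mu_N| / mu_N against this class's true mean
mu_N = grades.mean()
f = np.abs(running - mu_N) / mu_N

for n in (1, 5, 10, 20, N):
    print(f"n = {n:2d}   f(n) = {f[n - 1]:.4f}")
```

Note that $f(N)$ is exactly $0$ by construction: once every exam is graded, the running average *is* the class average.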
Any partial, or complete answer (or even a reference to some article) would be appreciated!
Thanks!
You could read about estimation of the parameters of a normal distribution. The error in the estimates of both the mean and the standard deviation shrinks as $\frac{1}{\sqrt n}$, the inverse square root of the number of samples. Moreover, since you are sampling without replacement from a finite class, the standard error of the running mean carries a finite-population correction factor $\sqrt{\frac{N-n}{N-1}}$, so it drops to exactly $0$ at $n=N$. All of this assumes that you grade the papers in random order. If you grade them in the order turned in, it could well be that the first ones turned in are better than the mean on average.
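A quick Monte Carlo check of this rate: the empirical spread of the partial average over many random grading orders should match the standard error $\frac{\sigma}{\sqrt n}\sqrt{\frac{N-n}{N-1}}$ (the $\sqrt{(N-n)/(N-1)}$ factor is the finite-population correction for sampling without replacement). The class parameters below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical class of N exams (assumed values, for illustration only)
N, mu, sigma = 200, 12.0, 3.0
grades = rng.normal(mu, sigma, size=N)
sd = grades.std()  # population sd of this particular class

trials = 20_000
for n in (10, 50, 100):
    # Average of the first n exams, over many random grading orders
    means = np.array([grades[rng.permutation(N)[:n]].mean()
                      for _ in range(trials)])
    # Theoretical standard error with finite-population correction
    theory = sd / np.sqrt(n) * np.sqrt((N - n) / (N - 1))
    print(f"n = {n:3d}   empirical sd = {means.std():.4f}   theory = {theory:.4f}")
```

The empirical and theoretical columns agree, which also answers the second question: $f(n)$ decays like $n^{-1/2}$ (a power of $n$, not exponentially) until the finite-population correction pulls it to $0$ at $n=N$.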