My teacher asked me, "How do you know the sample is large enough? 30? 200? 20,000?"
We proved the central limit theorem as things go to infinity, and that is true in theory. But in reality the sample size is never infinite, so how can we be confident applying the central limit theorem to a finite sample of, say, 30, 200, or 20,000 data points? Is there any technique for the finite case?
There are quantitative refinements of the central limit theorem. To my knowledge, they all need additional hypotheses, because one can cook up examples where the convergence in the central limit theorem is extremely slow. The most basic one I know of uses the third absolute moment (which need not be finite for the CLT to hold). The result is called the Berry-Esseen theorem: https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem
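For reference, the standard statement of the bound, with $m = E[X]$, $\sigma^2 = E[(X-m)^2]$, $\rho = E[|X-m|^3]$, and $F_n$ the CDF of the standardized sum of $n$ i.i.d. copies of $X$, is

$$\sup_x \left| F_n(x) - \Phi(x) \right| \le \frac{C \rho}{\sigma^3 \sqrt{n}},$$

where $\Phi$ is the standard normal CDF and $C$ is an absolute constant (known to be less than $1/2$).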
Basically, this says that if the third absolute moment is finite, then the convergence rate is on the order of $1/\sqrt{n}$. However, the convergence is slow if $E[|X-m|^3] \gg E[(X-m)^2]^{3/2}$. In other words, if $X$ is very skewed, the convergence is slow. A concrete example of this phenomenon is the normal approximation to the binomial: as $p \to 0^+$ or $p \to 1^-$, the Berry-Esseen theorem tells us to expect the convergence rate in the central limit theorem to degrade. Quantitatively, for small $p$, it bounds the maximum difference between the CDFs by essentially $\frac{C}{\sqrt{np}}$, where $C<1/2$. Notice that if we send $n \to \infty$ while holding $np$ constant, this theorem gives us no reason to expect convergence, which is correct: in that regime we get convergence to a Poisson variable, which is not normal.
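To make the $\frac{C}{\sqrt{np}}$ behavior concrete, here is a short sketch that evaluates the Berry-Esseen bound for a sum of $n$ i.i.d. Bernoulli($p$) variables, i.e. for Binomial($n, p$). I plug in the admissible constant $C = 0.4748$ (Shevtsova's refinement for the i.i.d. case); the optimal constant is not known, so treat the numbers as upper bounds rather than exact errors.

```python
import math

def berry_esseen_bound(n, p, C=0.4748):
    """Berry-Esseen bound on sup_x |F_n(x) - Phi(x)| for the
    standardized Binomial(n, p), i.e. a sum of n Bernoulli(p)'s.
    C = 0.4748 is an admissible constant for the i.i.d. case."""
    sigma2 = p * (1 - p)                 # variance of Bernoulli(p)
    rho = sigma2 * (p**2 + (1 - p)**2)   # E|X - p|^3 for Bernoulli(p)
    return C * rho / (sigma2 ** 1.5 * math.sqrt(n))

# Fixed p = 0.5: the bound shrinks like 1/sqrt(n).
print(berry_esseen_bound(30, 0.5), berry_esseen_bound(3000, 0.5))

# Fixed np = 3 (the Poisson regime): the bound does not shrink at all,
# consistent with the fact that the limit there is Poisson, not normal.
print(berry_esseen_bound(30, 0.1), berry_esseen_bound(3000, 0.001))
```

For $p = 1/2$ the bound at $n = 30$ is already below $0.09$, while along $np = 3$ it stays stuck near $0.24$ no matter how large $n$ gets.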
With this in mind, I tried a direct calculation of the binomial CDF and the corresponding normal CDF with $n=30$ and $p=0.1$. What I found is that the largest error occurs right at the mean, where the binomial CDF is almost $0.65$ while the normal CDF is (of course) $0.5$.
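This calculation can be reproduced with nothing beyond the Python standard library, using `math.comb` for the binomial PMF and `math.erf` for the normal CDF. Note that I compare the two CDFs at the integer points with no continuity correction, which is what makes the error at the mean so visible:

```python
import math

n, p = 30, 0.1
mu, sigma = n * p, math.sqrt(n * p * (1 - p))  # mean 3, sd ~1.64

def binom_cdf(k):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1))

def normal_cdf(x):
    """CDF of the approximating Normal(mu, sigma^2)."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

errors = {k: abs(binom_cdf(k) - normal_cdf(k)) for k in range(n + 1)}
worst = max(errors, key=errors.get)
print(worst, binom_cdf(worst), normal_cdf(worst), errors[worst])
# worst point is k = 3 (the mean): binomial CDF ~0.647 vs normal 0.5
```

The maximum error of about $0.147$ sits at $k = 3 = np$, exactly as described, and it is comfortably inside the Berry-Esseen bound of roughly $0.24$ for these parameters.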