Wikipedia's article reads:

> Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.[2][3] Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable—whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal.
I am trying to interpret this part:

> It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable—whose distribution converges to a normal distribution as the number of samples increases.
So we withdraw observations from the bag and average every $n$ of them. Is this correct?

What if it instead involves withdrawing $n$ times from many different bags, and averaging each batch?
I am assuming the same holds there.
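To make my interpretation concrete, here is a small simulation sketch of what I think the theorem describes (the uniform distribution, $n = 50$, and 10,000 trials are just my own choices for illustration):

```python
import random
import statistics

# Simulate "withdrawing from a bag": draw n observations from a
# non-normal distribution (uniform on [0, 1]) and average them,
# repeating the whole procedure many times.
random.seed(0)
n = 50           # observations per average
trials = 10_000  # number of averages computed

averages = [
    statistics.mean(random.uniform(0, 1) for _ in range(n))
    for _ in range(trials)
]

# A uniform(0, 1) draw has mean 0.5 and variance 1/12, so as I read
# the theorem, the averages should be approximately normal with
# mean 0.5 and standard deviation sqrt((1/12) / n).
print(statistics.mean(averages))   # should be close to 0.5
print(statistics.stdev(averages))  # should be close to sqrt(1/600)
```

A histogram of `averages` looks bell-shaped even though each individual draw is uniform, which is how I understand "converges to a normal distribution".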
Main Question
But how is this useful in practice, where we can't do that? No one does any averaging in practice.

Can you explain it in simple terms, rather than with a lot of formulas?