As we know, each random variable associates random events with probability values. These events belong to a specific population, and the random variable represents that population. In other words, random variables are representatives of their populations, and by virtue of this each has its own distribution.
What I am wondering about is what happens when we sum an adequate number of random variables. According to a resource I read two days ago, by summing a large number of random variables we can obtain a new random variable with a normal distribution, and this is called the central limit theorem.
I actually investigated the central limit theorem, but I could not see the relationship between the theorem itself and summing random variables. As I understood it, the theorem says that a large number of data samples forms a normal distribution.
Can anyone explain whether there is a relationship between summing random variables and the central limit theorem?
The classical central limit theorem states that, given a large sample of independent values $X_1, \dots, X_n$ from the same distribution with finite mean $\mu$ and finite standard deviation $\sigma$, $\frac{1}{\sqrt{n}}\sum_{i=1}^n\frac{X_i-\mu}{\sigma}\approx N(0,\,1)$. This approximation is equivalent to the sample mean being approximately $N(\mu,\,\frac{\sigma^2}{n})$, but the former statement is preferred because its limiting distribution does not depend on $n$, so we can speak of convergence.
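You can see this empirically with a quick simulation. The sketch below (using NumPy; the Exponential(1) distribution is just an assumed example, chosen because it is clearly skewed and non-Normal, with $\mu = \sigma = 1$) standardizes many sums of $n$ draws and checks that the result looks like $N(0,1)$:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000      # draws per sum
reps = 20000  # number of standardized sums
mu, sigma = 1.0, 1.0  # mean and sd of Exponential(1)

samples = rng.exponential(scale=1.0, size=(reps, n))

# (1/sqrt(n)) * sum_i (X_i - mu) / sigma  for each row
z = (samples - mu).sum(axis=1) / (sigma * np.sqrt(n))

# If the CLT holds, z should have mean ~0 and standard deviation ~1,
# even though the individual draws are strongly skewed.
print(z.mean(), z.std())
```

A histogram of `z` would show the familiar bell shape, despite each individual $X_i$ being exponentially distributed.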
Note that the Normal approximation we obtain is for the sample mean (or your preferred linear transformation thereof), not for the distribution being sampled. A common misconception is that this theorem implies Normal distributions themselves are prevalent "in the real world".
Note also that if the sampled distribution does not have a finite mean and variance, all of this falls apart. In the famous example of the Cauchy distribution, the sample mean still has the same Cauchy distribution as a single observation. (The sample median, on the other hand, is approximately Normal.)
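The Cauchy case is easy to demonstrate as well. In the sketch below (parameters are assumptions for illustration), the sample mean of standard Cauchy draws keeps its heavy tails no matter how large $n$ gets, while the sample median concentrates tightly around zero:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000
reps = 5000
samples = rng.standard_cauchy(size=(reps, n))

# Sample mean of n standard Cauchy draws is still standard Cauchy:
# averaging does not tame the tails at all.
means = samples.mean(axis=1)

# Sample median, by contrast, is approximately Normal and its spread
# shrinks like 1/sqrt(n).
medians = np.median(samples, axis=1)

# Fraction of extreme values (|.| > 2): large for the means,
# essentially zero for the medians.
print(np.mean(np.abs(means) > 2), np.mean(np.abs(medians) > 2))
```

For a standard Cauchy variable, $P(|X| > 2) \approx 0.30$, and that is roughly what the means show; the medians almost never stray that far.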