I have a sample mean given by:
$$S_n=\frac{1}{n}\sum_{i=1}^nX_i$$ where the $X_i$ are i.i.d. Gaussian random variables, i.e., each of them has pdf:
$$p(X_i=x_i)=\frac{1}{\sqrt{2\pi\sigma^2}} e^{-(x_i-\mu)^2/2\sigma^2}$$
The parameters $\mu, \sigma^2$ are, of course, the mean and variance.
The pdf of $S_n$ is then given by:
$$p(S_n=s)=\sqrt{\frac{n}{2\pi\sigma^2}} e^{-n(s-\mu)^2/2\sigma^2}$$
How do I get this result? It is obvious that a sum of independent Gaussian random variables is also Gaussian, but how does the $n$ get into the square root and into the exponent of $e$?
You can use the following Theorem:
If $X_1, X_2, \dots, X_n$ are mutually independent normal random variables with means $\mu_1, \mu_2, \dots, \mu_n$ and variances $\sigma_1^2, \sigma_2^2, \dots, \sigma_n^2$, then the linear combination:
$$Y=\sum\limits_{i=1}^n c_iX_i$$
follows the normal distribution:
$$ N\left(\sum\limits_{i=1}^n c_i \mu_i,\sum\limits_{i=1}^n c^2_i \sigma^2_i\right) $$
A proof of that theorem is given here and uses moment generating functions.
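For the i.i.d. case in your question, the MGF argument can be sketched directly. Assuming the standard fact that $M_{X_i}(t) = \exp\left(\mu t + \tfrac{1}{2}\sigma^2 t^2\right)$ and that the MGF of a sum of independent variables is the product of their MGFs:

```latex
M_{S_n}(t)
  = \prod_{i=1}^n M_{X_i}\!\left(\frac{t}{n}\right)
  = \left[\exp\!\left(\mu\frac{t}{n} + \frac{\sigma^2 t^2}{2n^2}\right)\right]^{n}
  = \exp\!\left(\mu t + \frac{\sigma^2}{n}\cdot\frac{t^2}{2}\right)
```

The last expression is the MGF of $N\!\left(\mu, \frac{\sigma^2}{n}\right)$, which identifies the distribution of $S_n$.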
For your problem, apply the theorem with $c_i = \frac{1}{n}$, $\mu_i = \mu$, and $\sigma_i^2 = \sigma^2$ for all $i$:
$\sum\limits_{i=1}^n c_i \mu_i = \sum\limits_{i=1}^n \frac{1}{n} \mu = \mu$
and
$\sum\limits_{i=1}^n c^2_i \sigma^2_i = \sum\limits_{i=1}^n \frac{1}{n^2} \sigma^2 = \frac{\sigma^2}{n}$
The resulting normal distribution is then
$$ S_n \sim N\left( \mu, \frac{\sigma^2}{n} \right) $$
This means your distribution for $S_n$ looks like the distribution for the $X_i$ except that you replace $\sigma^2$ by $\sigma^2/n$; substituting $\sigma^2/n$ into the Gaussian pdf is exactly what puts the $n$ inside the square root and in the exponent.
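A quick simulation (a sketch using only Python's standard library; the parameter values $\mu = 2$, $\sigma = 3$, $n = 25$ are arbitrary choices for illustration) checks that the sample mean concentrates around $\mu$ with standard deviation $\sigma/\sqrt{n}$:

```python
import random
import statistics

random.seed(0)

mu, sigma, n = 2.0, 3.0, 25  # illustrative values, not from the question
trials = 20_000

# Draw many independent realizations of the sample mean S_n,
# each computed from n i.i.d. N(mu, sigma^2) draws.
means = [
    statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(trials)
]

# The theorem predicts S_n ~ N(mu, sigma^2 / n),
# so the empirical stdev should be close to sigma / sqrt(n) = 0.6.
print(statistics.fmean(means))  # close to mu = 2.0
print(statistics.stdev(means))  # close to sigma / sqrt(n) = 0.6
```

This does not prove the result, but it makes the $\sigma^2/n$ variance easy to see empirically: multiply `n` by 4 and the empirical standard deviation of the sample means halves.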