Say you have an independent random variable, like the mass of single buns produced by a bakery, and it is normally distributed. If the buns are sold in packs of 4, why do I have to divide the variance by 4, so that $\bar{X}_4 \sim N(\mu, \sigma^2 / 4)$, to find the percentage of packs whose mean bun mass is between $x$ and $y$?
I understand that after this you just standardise with $\frac{x - \mu}{\sigma}$ (using the new standard deviation), and I can see the connection between the 4 buns and dividing by 4, but I don't see why it works. It isn't in my textbook, but it was on a past paper.
It depends on the context, but presumably you are noticing that the distribution of the sum of $n$ independent $N(\mu,\sigma^2)$ variables is distributed as $N(n\mu,n\sigma^2)$, while the sample mean is distributed as $N(\mu,\sigma^2/n)$. This follows from the rules:
$$E[X+Y]=E[X]+E[Y] \\ E[cX]=cE[X] \text{ when $c$ is constant } \\ \operatorname{Var}(X+Y)=\operatorname{Var}(X)+\operatorname{Var}(Y) \text{ when $X,Y$ are independent } \\ \operatorname{Var}(cX)=c^2\operatorname{Var}(X) \text{ when $c$ is constant }.$$
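Applying these rules to the mean of four buns, $\bar{X} = \tfrac{1}{4}(X_1+X_2+X_3+X_4)$:

$$E[\bar{X}] = \tfrac{1}{4}\big(E[X_1]+\cdots+E[X_4]\big) = \tfrac{1}{4}\cdot 4\mu = \mu \\ \operatorname{Var}(\bar{X}) = \tfrac{1}{16}\big(\operatorname{Var}(X_1)+\cdots+\operatorname{Var}(X_4)\big) = \tfrac{1}{16}\cdot 4\sigma^2 = \tfrac{\sigma^2}{4}.$$

The $\tfrac{1}{16}$ comes from the $c^2$ in the last rule with $c=\tfrac{1}{4}$, and independence is what lets the four variances simply add. If it helps to see it numerically, here is a quick simulation sketch (the values $\mu = 50$, $\sigma = 2$ are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 50.0, 2.0  # hypothetical bun mass parameters (grams)

# Simulate 100,000 packs of 4 independent bun masses each.
packs = rng.normal(mu, sigma, size=(100_000, 4))

# Mean mass per pack: one N(mu, sigma^2/4) draw per row.
pack_means = packs.mean(axis=1)

print(pack_means.mean())  # close to mu = 50
print(pack_means.var())   # close to sigma**2 / 4 = 1.0
```

The variance of the pack means comes out near $\sigma^2/4 = 1$, not $\sigma^2 = 4$, which is exactly the division by 4 you were asked to do.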