Full question:
The average rainfall of Charlesville in August is normally distributed with mean 68 mm and standard deviation 8 mm. Over a 40 year period, how many times would you expect there to be less than 52 mm of rainfall during August in Charlesville?
I drew a graph of the normal distribution for this data set. Then, finding that $<52$ mm of rain would likely constitute about 2.28% of the data (between $μ-3σ$ and $μ-2σ$), I multiplied $0.0228$ by $40$ to get $0.912$. The textbook answer says that one instance of $<52$ mm of rainfall could be expected over the $40$ year period. Have they just rounded $0.912$ to the nearest whole number, or are they going about it an entirely different way?
It seems as if you are trying to use the Empirical Rule to solve this problem. According to the ER, about 95% of the probability under a normal curve lies between $\mu \pm 2\sigma.$ Then, by symmetry, about 2.5% of observations lie below $\mu - 2\sigma,$ which is $68 - 2(8) = 52$ for your rainfall distribution. So over a 40-year period, one might expect to see $40(0.025) = 1$ year with rainfall that low.
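The Empirical-Rule arithmetic above can be sketched in a few lines (a quick illustration; the variable names are mine):

```python
# Empirical Rule: ~95% of a normal distribution lies within mu +/- 2*sigma,
# so by symmetry ~2.5% lies below mu - 2*sigma.
mu, sigma = 68, 8                 # mean and sd of August rainfall (mm)
cutoff = mu - 2 * sigma           # lower 2.5% boundary: 52 mm
expected_years = 40 * 0.025       # expected low-rainfall years out of 40 (~1)
print(cutoff, round(expected_years, 3))
```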
If you know how to use printed tables of the standard normal CDF, or statistical software, you can get a slightly more accurate result:
$$P(X < 52) = P\left(\frac{X - \mu}{\sigma} < \frac{52 - 68}{8}\right) = P(Z < -2) = \cdots,$$
where $Z$ is standard normal, and the procedure of subtracting the mean and then dividing by the standard deviation is called 'standardization'.
In R statistical software, standardization (and the rounding needed to use printed tables) can be avoided: the single call `pnorm(52, 68, 8)` returns $P(X < 52)$ directly.
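The same left-tail probability can be cross-checked in Python using only the standard library, writing the normal CDF in terms of `math.erf` (a sketch; the function and variable names are mine):

```python
from math import erf, sqrt

def normal_cdf(x, mean=0.0, sd=1.0):
    """Left-tail probability P(X < x) for X ~ Normal(mean, sd)."""
    z = (x - mean) / sd                 # standardization
    return 0.5 * (1 + erf(z / sqrt(2)))  # Phi(z) via the error function

p = normal_cdf(52, mean=68, sd=8)       # = Phi(-2)
print(round(p, 5))                      # 0.02275
print(round(40 * p, 2))                 # expected years out of 40: 0.91
```

Multiplying by 40 gives about 0.91 expected years, consistent with the 0.912 in the question and with the answer book's rounding to 1.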
Notes: (1) I share @Did's doubts that $3\sigma$ has anything to do with finding your answer. (2) I also agree with your guess that the answer book is rounding to the nearest integer. (3) This is a fine drill problem, but the claim that rainfall is normally distributed can only be an approximation. If there is a discrepancy between the model and reality, it is likely to be in the far 'tails' of the distribution, which is exactly where your problem is focused.