How large a random sample should be taken from a normal distribution in order for the probability to be at least $0.99$ that the sample mean will be within one standard deviation of the mean of the distribution?
What I have initially done which I think is not valid:
Let $\bar X=\frac1n\sum^n_{i=1}X_i$ be the sample mean, where $X_i\sim N(\mu, \sigma^2)$.
By the Central Limit Theorem, $\frac{S_n-n\mu}{\sigma \sqrt{n}} \approx Z$ where $Z$ is a standard normal random variable and $S_n=\sum_iX_i$.
So for large enough $n$, $\bar X=\frac{S_n}n \approx \frac{Z\sigma}{\sqrt{n}}+\mu$.
Then approximately $\mathbb P(|\bar X-\mu|<\sigma)=\mathbb P\left(\left|\frac{Z\sigma}{\sqrt{n}}\right|<\sigma\right)=\mathbb P(|Z|<\sqrt{n})$. Since $\mathbb P(Z<2.58)\approx 0.995$, i.e. $\mathbb P(|Z|<2.58)\approx 0.99$, any $\sqrt{n}\ge 2.58$ will do, so $n\ge 2.58^2\approx 6.66$, giving $n=7$. But this is surely too small an $n$ for the Central Limit Theorem to have been used in the first place.
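As a quick numerical sanity check of that last step, here is a small sketch in Python (standard library only; the choices $\mu=0$, $\sigma=1$ and the trial count are arbitrary, and the simulation just estimates $\mathbb P(|\bar X-\mu|<\sigma)$ at the resulting $n$):

```python
import random
from math import ceil
from statistics import NormalDist  # standard library, Python 3.8+

# Require P(|Z| < sqrt(n)) >= 0.99, i.e. sqrt(n) >= z_{0.995}.
z = NormalDist().inv_cdf(0.995)  # two-sided 99% critical value, ~2.576
n = ceil(z ** 2)                 # smallest integer n with sqrt(n) >= z
print(z, n)

# Monte Carlo estimate of P(|Xbar - mu| < sigma) for this n.
random.seed(0)
mu, sigma, trials = 0.0, 1.0, 200_000
hits = sum(
    abs(sum(random.gauss(mu, sigma) for _ in range(n)) / n - mu) < sigma
    for _ in range(trials)
)
print(hits / trials)  # slightly above 0.99, since sqrt(n) > 2.576
```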
Is this the correct approach, or completely wrong? From the context of the question I feel the Central Limit Theorem should be used somewhere, but I have almost certainly not used it correctly.
Any help is appreciated, thanks.