Suppose a random sample of size n is taken from a normal random variable X~N(μ, 1.5). To be 95% confident that the error between X̄ and the unknown population mean μ is at most .85, how large of a sample needs to be taken?
Does this mean that the confidence interval has width $.85 \cdot 2$?
I know what the formulas are for a confidence interval for the mean, but I don't know where to start with this one.
$\bar{X}$ is distributed as $N(\mu, 1.5/n)$ (to see this, remember that adding independent normals adds their means and variances, and scaling a random variable by $\lambda$ multiplies its variance by $\lambda^2$).
You can translate everything by $-\mu$.
Therefore, you want to choose $n$ so that $P( A_n \in [ - .85, .85]) \geq .95$, where $A_n \sim N(0, 1.5/n)$. (Here $A_n$ has the distribution of $\bar{X} - \mu$ when taking $n$ samples.)
From the $68$-$95$-$99.7$ rule, you basically want $.85$ to be $2$ standard deviations; the standard deviation of $A_n$ is $\sqrt{1.5/n}$, so solve $2\sqrt{1.5/n} \leq .85$...
(You can compute it more exactly also.)
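For instance, here is a short Python sketch of the exact computation, assuming $1.5$ is the variance of $X$ (as the answer above does) and using the two-sided $95\%$ critical value in place of the rough $2$:

```python
import math
from statistics import NormalDist

sigma2 = 1.5   # assumed: the 1.5 in N(mu, 1.5) is the variance
E = 0.85       # maximum allowed error |X-bar - mu|

# Two-sided 95% critical value: P(|Z| <= z) = .95 for Z ~ N(0, 1)
z = NormalDist().inv_cdf(0.975)  # about 1.96

# Require z * sqrt(sigma2 / n) <= E, i.e. n >= z^2 * sigma2 / E^2
n = math.ceil(z**2 * sigma2 / E**2)
print(z, n)  # z ~ 1.96, n = 8
```

With the rougher $z = 2$ from the $68$-$95$-$99.7$ rule you would instead get $n \geq 6/.7225 \approx 8.3$, i.e. $n = 9$.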
Does that help?