$70$ bags of sugar selected at random from a large batch are weighed. The mean weight of the sample is $\bar x=227\ \mathrm{g}$ and the standard deviation of the sample is $s=7.5\ \mathrm{g}$.
Calculate a $95\%$ confidence interval for the mean weight of all packs in the batch.
Do I use: ($\mu_{95}=\bar x \pm 1.96s$) or ($\mu_{95}=\bar x \pm 1.96\frac{s}{\sqrt{n}}$) ?
My textbook gives the formula: $\mu_{95}=\bar x \pm 1.96\frac{\sigma}{\sqrt{n}}$
where $\sigma$ is the known standard deviation of sugar bags.
My textbook says: If $\sigma$ the standard deviation of the population is not known, use $s$ the standard deviation of the sample as an approximation.
I take this as meaning: if $\sigma$ is not known then: $\mu_{95} \approx \bar x \pm 1.96\frac{s}{\sqrt{n}}$
But my textbook also says that: the standard deviation of the sampling distribution is given by $\mu_{\bar x}=\frac{\sigma}{\sqrt{n}}$
It's not clear to me what the difference is between this $\mu_{\bar x}$ and $s$.
I thought $s$ was the standard deviation of the sample means. Is $\mu_{\bar x}$ not the same thing?
Does this mean that the standard deviation of the sample means $s=\frac{\sigma}{\sqrt{n}}$ ?
So if $\sigma$ is not known but $s$ is known do I use: ($\mu_{95}=\bar x \pm 1.96s$) or ($\mu_{95}=\bar x \pm 1.96\frac{s}{\sqrt{n}}$) ?
I assume that $s=7.5\,\mathrm{g}$ is the standard deviation of the sample.
A 95% confidence interval for the mean weight of the population of all packs in the batch is
$$\mu_{95}=\bar x \pm 1.96\frac{s}{\sqrt{n}}$$
To be more correct, you should use a t-distribution instead of a z-distribution to calculate the confidence interval, i.e. use $t_{n-1,\,0.025}$ (the upper $2.5\%$ point of the t-distribution with $n-1$ degrees of freedom) instead of $1.96$; but if $n$ is large the difference will be minimal.
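As a quick numeric check, here is a sketch in Python using the numbers from the question; the t critical value ($t_{69}$ at the $0.975$ quantile, roughly $1.995$) is taken from tables rather than computed:

```python
import math

# sample summary from the question
n, xbar, s = 70, 227.0, 7.5
se = s / math.sqrt(n)                 # estimated standard error of the mean

# z-based 95% interval with multiplier 1.96
z_lo, z_hi = xbar - 1.96 * se, xbar + 1.96 * se

# t-based 95% interval; t with 69 df at the 0.975 quantile is about 1.995
t_crit = 1.995
t_lo, t_hi = xbar - t_crit * se, xbar + t_crit * se

print(f"z interval: ({z_lo:.2f}, {z_hi:.2f})")  # (225.24, 228.76)
print(f"t interval: ({t_lo:.2f}, {t_hi:.2f})")  # (225.21, 228.79)
```

As expected for $n=70$, the two intervals differ only in the second decimal place.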
The symbol for the standard deviation of the sampling distribution should be something like $\sigma_{\bar X}$ and not $\mu_{\bar x}$.
We have
$$\sigma_{\bar X}=\frac{\sigma}{\sqrt{n}}.$$
$s$ can be used as an estimate of $\sigma$.
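The relationship $\sigma_{\bar X}=\sigma/\sqrt{n}$ can also be seen by simulation. A sketch (the population mean and $\sigma$ below are assumed values, chosen to match the question's numbers):

```python
import math
import random
import statistics

random.seed(1)
mu, sigma, n = 227.0, 7.5, 70   # assumed population parameters for the demo

# draw many samples of size n and record each sample mean
means = []
for _ in range(5000):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(statistics.fmean(sample))

# the spread of the sample means should be close to sigma / sqrt(n),
# which is much smaller than sigma itself
sd_of_means = statistics.stdev(means)
print(round(sigma / math.sqrt(n), 3))  # 0.896
print(round(sd_of_means, 3))           # close to 0.896
```

Note that the standard deviation of the sample means comes out near $0.9\,\mathrm{g}$, not near $7.5\,\mathrm{g}$: $s$ estimates $\sigma$, the spread of individual bags, while $\sigma_{\bar X}$ is the much smaller spread of the means.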