I have the following problem in my problem book:
Let $X \sim N(\mu, \sigma^2)$ where both $\mu$ and $\sigma^2$ are unknown. I have to find a confidence interval for the mean.
What I have so far:
I know that when $\sigma^2$ is unknown I can use the t-distribution to find a confidence interval, i.e.: $$\bar x \pm t_{n-1}^*\frac{s}{\sqrt{n}},$$
where $t_{n-1}^*$ is the critical value of the t-distribution with $n-1$ degrees of freedom (at the chosen confidence level), $s$ is the sample standard deviation, and $\bar x$ is the sample mean of a sample of size $n$.
The whole problem seems rather vague to me. Is this enough to describe the confidence interval? Since I don't know the sample mean $\bar x$, is it acceptable to leave the solution in this form?
When the standard deviation is unknown but the distribution is Gaussian, the trick is to use the fact that the standardized (pivotal) quantity
$$t:=\frac{\bar x-\mu}{\sqrt{\dfrac{s^2}n}},$$ where $s^2$ is the corrected (Bessel-adjusted) sample variance, follows a Student's t-distribution with $n-1$ degrees of freedom.
Since you know the probability that this quantity falls in a given range, you can invert that statement to get your confidence interval for $\mu$: from $P(-t_{n-1}^* \le t \le t_{n-1}^*) = 1-\alpha$ you recover exactly the interval $\bar x \pm t_{n-1}^*\,s/\sqrt{n}$. Without observed data you cannot give numerical endpoints, so expressing the interval in terms of $\bar x$ and $s$ is the expected answer.
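As a concrete illustration, here is a minimal sketch in Python of how the interval is computed once data are available. The sample below is simulated and purely hypothetical; the point is the mechanics: get $\bar x$ and $s$, look up the critical value $t_{n-1}^*$, and form $\bar x \pm t_{n-1}^*\,s/\sqrt{n}$.

```python
import numpy as np
from scipy import stats

# Hypothetical data: 25 draws from N(10, 2^2), just for illustration.
rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=25)

n = sample.size
xbar = sample.mean()
s = sample.std(ddof=1)  # corrected (Bessel-adjusted) sample standard deviation

alpha = 0.05  # 95% confidence level
tcrit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # critical value t*_{n-1}

lo = xbar - tcrit * s / np.sqrt(n)
hi = xbar + tcrit * s / np.sqrt(n)
print(f"95% CI for mu: ({lo:.3f}, {hi:.3f})")
```

Note the `ddof=1` in the standard deviation: that is exactly the Bessel correction that makes the pivotal quantity t-distributed rather than (approximately) normal.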