I have a question that I am confused by.
Suppose I have a normally distributed random variable $Y$, with mean $\mu$ and variance $\sigma^2$, and I'm testing a null hypothesis of mean $= \mu_1$ and variance $= \sigma_1^2$ for some values $\mu_1$ and $\sigma_1$.
Then suppose my alternative hypothesis is mean $= \mu_2$ and variance $= \sigma_2^2$, for some values $\mu_2$ and $\sigma_2$. How can I do a hypothesis test to test these null and alternative hypotheses, respectively? Thank you very much in advance!
You have $Y\sim N(\mu,\sigma^2),$ and the null and alternative hypotheses say $(\mu,\sigma^2) = (\mu_i,\sigma_i^2)$ for $i=1,2$ respectively.
Let $y$ be the observed value of $Y.$ The likelihood function is $$ L(i) = \text{constant} \times \frac 1 {\sigma_i} \exp \left( \frac{-(y-\mu_i)^2}{2\sigma_i^2} \right) \text{ for } i = 1,2. $$
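As a quick numerical sketch of this likelihood ratio (the parameter values and the observed $y$ below are made up purely for illustration):

```python
import math

def likelihood(y, mu, sigma):
    # Normal density of Y at the observed value y; the "constant" in the
    # formula above is 1/sqrt(2*pi).
    return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical parameter values, just for illustration:
mu1, sigma1 = 0.0, 2.0   # null hypothesis
mu2, sigma2 = 1.0, 1.0   # alternative hypothesis
y = 0.5                  # observed value of Y

ratio = likelihood(y, mu1, sigma1) / likelihood(y, mu2, sigma2)  # L(1)/L(2)
```

The normalizing constant cancels in the ratio, which is why it can be ignored in what follows.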
Let $A\sim B$ mean that any change in $y$ that makes $A$ bigger also makes $B$ bigger and vice-versa.
For now assume $\sigma_1>\sigma_2$; then make the appropriate changes if the opposite holds.
The likelihood ratio is \begin{align} \frac{L(1)}{L(2)} & = \frac{\sigma_2 \exp\left( \dfrac{-(y-\mu_1)^2}{2\sigma_1^2} \right)}{\sigma_1 \exp \left( \dfrac{-(y-\mu_2)^2}{2\sigma_2^2} \right)} \\[10pt] & \sim \frac{-(y - \mu_1)^2}{2\sigma_1^2} + \frac{(y - \mu_2)^2}{2\sigma_2^2} \\[10pt] & \sim \sigma_1^2 (y - \mu_2)^2 - \sigma_2^2 (y - \mu_1)^2 \\[10pt] & = (\sigma_1^2 - \sigma_2^2) \left( y^2 - 2\cdot\frac{\sigma_1^2\mu_2 - \sigma_2^2\mu_1}{\sigma_1^2 - \sigma_2^2}\, y + \cdots\cdots \right) \\ & \qquad \text{(Here and below, “$\cdots\cdots$'' means something not depending on $y$.)} \\[10pt] & \sim \left( y - \frac{\sigma_1^2\mu_2 - \sigma_2^2\mu_1}{\sigma_1^2 - \sigma_2^2} \right)^2, \tag 1 \end{align} where the last step uses the assumption $\sigma_1 > \sigma_2,$ so that the factor $\sigma_1^2 - \sigma_2^2$ is positive. One rejects the null hypothesis if the test statistic $(1)$ is too small. How small is too small depends on the level of the test and on the probability distribution of $(1).$ Under the null hypothesis, $(1)$ divided by $\sigma_1^2$ has a non-central chi-square distribution with $1$ degree of freedom. ($((y-\mu_1)/\sigma_1)^2$ has a central chi-square distribution; the test statistic's distribution is non-central because the quantity subtracted from $y$ is not the mean of $Y$ under the null.)
If one observes an i.i.d. sample $Y_1,\ldots,Y_n,$ then the same argument can be made with $(y_1+\cdots+y_n)/n$ in place of $y$ and $\sigma_i^2/n$ in place of $\sigma_i^2.$
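The $n$-sample version of the statistic can be sketched the same way (note that replacing $\sigma_i^2$ by $\sigma_i^2/n$ multiplies both the numerator and the denominator of the centering constant in $(1)$ by $1/n,$ so that constant is unchanged; only the null distribution of the statistic changes):

```python
def sample_test_statistic(ys, mu1, sigma1, mu2, sigma2):
    # Version of (1) for an i.i.d. sample: the sample mean replaces y.
    # Under the null the sample mean is N(mu1, sigma1**2 / n), so the
    # calibration of "too small" must use sigma1**2 / n, n = len(ys).
    ybar = sum(ys) / len(ys)
    c = (sigma1**2 * mu2 - sigma2**2 * mu1) / (sigma1**2 - sigma2**2)
    return (ybar - c) ** 2
```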