Likelihood ratio test question


If $X_i$, $i=1,\ldots,n$, are mutually independent observations from normal distributions with known variances $\sigma_i^2$, respectively, construct a test of the hypothesis that their means are all equal.

My approach: $\lambda=\frac{L(\omega)}{L(\Omega)}$, where $\omega$ is the restricted parameter space in which all means are equal, so in the likelihood the means are replaced by a common $\mu$ while the $\sigma_i^2$ stay the same. In $L(\Omega)$, the means remain unrestricted as $\mu_i$, with variances $\sigma_i^2$. Do we need to replace the means and variances by their maximum likelihood estimates somewhere? I am confused about that.


Your approach is a likelihood ratio test. (Note that the Neyman–Pearson lemma guarantees optimality only for simple hypotheses; for a composite hypothesis like this one, the likelihood ratio test is the standard general-purpose construction.) The test statistic is computed as:

$$ T = -2 \left( \max_{\mu \in \mathbb{R}} l(X_{1}, \ldots, X_{n} \ | \ \mu_1= \ldots = \mu_n = \mu) - \max_{\mu_{1}, \ldots, \mu_{n} \in \mathbb{R}} l(X_{1}, \ldots, X_{n} \ | \ \mu_1, \ldots, \mu_{n}) \right) $$

where $l(\cdot)$ is the log-likelihood function for the normal model: $$l(X_{1}, \ldots, X_{n} \ | \ \mu_1, \ldots, \mu_{n}) = -\sum_{i} \frac{(X_{i}-\mu_{i})^{2}}{2 \sigma^{2}_{i}} + (\text{constant in the } \mu_{i}\text{'s})$$ The unrestricted log-likelihood is clearly maximized by setting $\mu_{i} = X_{i}$, which makes the sum vanish, and the constants cancel between the two maxima.

Therefore: $$T = \min_{\mu \in \mathbb{R}} \sum_{i} \frac{(X_{i}-\mu)^{2}}{ \sigma^{2}_{i}}$$ Differentiating with respect to $\mu$ and setting the derivative to zero gives the first-order condition: $$\sum_{i} \frac{-2(X_{i}-\mu^{*})}{ \sigma^{2}_{i}} = 0$$ $$\Longrightarrow \mu^{*} = \left( \sum_{i} \sigma_{i}^{-2} \right)^{-1}\left( \sum_{i} X_{i}\sigma_{i}^{-2} \right)$$ so $\mu^{*}$ is the inverse-variance weighted mean of the $X_i$. If all $\sigma^2_{i}$ are equal, this reduces to $\mu^{*} = \bar{X}$.
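As a quick numerical sanity check, the weighted-mean formula can be evaluated directly; the observations and variances below are made-up illustration values:

```python
# Inverse-variance weighted mean mu* for hypothetical data;
# observations with smaller variance get proportionally more weight.
x = [1.2, 0.8, 1.5]
sigma2 = [0.5, 1.0, 2.0]       # known variances (assumed values)
w = [1.0 / s for s in sigma2]  # weights 1/sigma_i^2
mu_star = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
# With equal variances this reduces to the ordinary sample mean.
```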

In conclusion, we can compute $\mu^{*}$ with the knowns $X_{i}$ and $\sigma^{2}_{i}$, and then the test statistic $T$: $$T = \sum_{i}\frac{(X_{i}-\mu^{*})^{2}}{ \sigma^{2}_{i}}$$

By Wilks's theorem, the likelihood ratio test statistic is asymptotically chi-squared with degrees of freedom equal to the difference in dimension between the alternative and null parameter spaces, which in this case is $n-1$: $$T \sim \chi^{2}_{n-1}$$ In our case, $\chi^{2}_{n-1}$ is actually the exact distribution of $T$ under the null, since $T$ is a weighted sum of squares of the normal residuals $X_i - \mu^{*}$ with known variances.

The p-value can then be computed as $1$ minus the CDF of a $\chi^{2}_{n-1}$ distribution evaluated at $T$ (i.e., the survival function at $T$).
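Putting the steps together, here is a minimal sketch of the whole test in Python. The data and variances are hypothetical, and the chi-squared survival function is implemented from its incomplete-gamma series so the snippet needs only the standard library (in practice one would use a library routine such as SciPy's):

```python
import math

def chi2_sf(x, k):
    """Survival function P(X > x) for chi-squared with k degrees of
    freedom, via the regularized lower incomplete gamma series (x > 0)."""
    a, hx = k / 2.0, x / 2.0
    term = 1.0 / a
    total = term
    n = 1
    while term > total * 1e-15:
        term *= hx / (a + n)   # next series term of the expansion
        total += term
        n += 1
    # P(a, hx) = hx^a * exp(-hx) / Gamma(a) * series
    p_lower = total * math.exp(a * math.log(hx) - hx - math.lgamma(a))
    return 1.0 - p_lower

def equal_means_test(x, sigma2):
    """LRT of H0: all means equal, given known variances sigma2.
    Returns (mu_star, T, p_value)."""
    w = [1.0 / s for s in sigma2]  # weights 1/sigma_i^2
    mu_star = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    T = sum((xi - mu_star) ** 2 / s for xi, s in zip(x, sigma2))
    return mu_star, T, chi2_sf(T, len(x) - 1)

# Hypothetical data: three observations with different known variances.
mu_star, T, p = equal_means_test([1.2, 0.8, 1.5], [0.5, 1.0, 2.0])
```

A large $T$ (small p-value) indicates that the observations are too spread out, relative to their known variances, to share a common mean.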