generalised likelihood ratio test


Given that the p.m.f. of a negative binomial random variable $X$ is $$P(X=x\mid\theta)={r+x-1 \choose x}(1-\theta)^x\theta^r, \qquad x=0,1,2,\dots$$ with parameters $\theta\in[0,1]$ and $r>0$, consider the maximum likelihood estimator $$\hat{\theta}=\frac{r}{r+x}.$$

State the form of a generalised likelihood ratio test based on $\hat{\theta}$ for the hypothesis testing problem:

$$\begin{cases} \mathcal{H}_0, & \theta=0.5 \\ \mathcal{H}_1, & \theta\ne 0.5 \end{cases}$$

What I know:

I know I need to compute the likelihood ratio, but because no data values are given I'm confused about how to do this. The likelihood function is $$L(\theta)= {r+x-1 \choose x}(1-\theta)^x\theta^r$$

Then does the ratio become $\frac{{r+x-1 \choose x}(1-0.5)^x(0.5)^r}{{r+x-1 \choose x}(1-\theta)^x\theta^r}$ with $\theta\neq 0.5$ in the denominator?


There is 1 best solution below


No, it's wrong.

The definition of the generalized likelihood ratio is the following:

$$\lambda(\mathbf{x})=\frac{\sup_{\theta=\theta_0}L(\theta\mid\mathbf{x})}{\sup_{\theta \in\Theta}L(\theta\mid\mathbf{x})}$$

Thus in the denominator of your ratio you should use $\theta=\hat{\theta}_{\mathrm{ML}}$, the value that maximises the likelihood, rather than a generic $\theta\neq 0.5$.

Substituting $\hat{\theta}=\frac{r}{r+x}$ in the denominator (the binomial coefficients cancel), you get

$$\lambda(\mathbf{x})=\frac{(1-0.5)^x(0.5)^r}{(1-\hat{\theta})^x\,\hat{\theta}^r}=\frac{(0.5)^{x+r}}{\big(\frac{x}{x+r}\big)^x\big(\frac{r}{x+r}\big)^r}=\frac{\Big(\frac{x+r}{2}\Big)^{x+r}}{r^r x^x}$$
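As a sanity check, a minimal Python sketch confirms the closed form agrees with the direct ratio $L(0.5)/L(\hat{\theta})$ (the sample values $x=7$, $r=3$ are arbitrary choices of mine, not part of the exercise):

```python
import math

def nb_likelihood(theta, x, r):
    # L(theta) = C(r+x-1, x) * (1-theta)^x * theta^r
    return math.comb(r + x - 1, x) * (1 - theta) ** x * theta ** r

def glrt_lambda(x, r):
    # Closed form of the GLRT statistic: ((x+r)/2)^(x+r) / (r^r * x^x)
    return ((x + r) / 2) ** (x + r) / (r ** r * x ** x)

x, r = 7, 3                       # arbitrary example values
theta_hat = r / (r + x)           # the MLE from the problem statement
direct = nb_likelihood(0.5, x, r) / nb_likelihood(theta_hat, x, r)
print(direct, glrt_lambda(x, r))  # the two agree (~0.4392)
```

The binomial coefficients cancel in `direct`, which is why the closed form contains no combinatorial factor.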

The exercise ends here, since nothing else is asked. Continuing anyway, note that

$$0\leq \lambda(\mathbf{x})\leq 1$$

and you will reject $H_0$ when $\lambda(\mathbf{x})\leq c$ for a suitable critical value $c$.
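One way to pick $c$ at a given level is to simulate the null distribution of $\lambda$. A sketch under my own assumptions (integer $r$, a geometric-sum sampler for the negative binomial, and a 5% level, none of which are stated in the exercise):

```python
import math
import random

def sample_negbin(theta, r, rng):
    # For integer r, X is a sum of r geometric "failures before success"
    # counts; each geometric draw uses inversion: floor(ln U / ln(1-theta)).
    return sum(int(math.log(rng.random()) / math.log(1 - theta)) for _ in range(r))

def glrt_lambda(x, r):
    # ((x+r)/2)^(x+r) / (r^r * x^x); Python's 0**0 == 1 handles x = 0
    return ((x + r) / 2) ** (x + r) / (r ** r * x ** x)

rng = random.Random(0)
r = 3
lams = sorted(glrt_lambda(sample_negbin(0.5, r, rng), r) for _ in range(100_000))
c = lams[int(0.05 * len(lams))]  # empirical 5% quantile of lambda under H0
print(c)                         # reject H0 when lambda(x) <= c
```

Because $X$ is discrete, the exact 5% level is typically not attainable; the empirical quantile gives a test of level at most roughly 5%.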


In Bayesian statistics (just FYI; it does not apply here, since you are working in classical statistics), under certain assumptions (e.g. a uniform prior on $\theta$) the ratio you posted is not completely wrong: it becomes

$$\frac{0.5^{x+r}}{\int_0^1 \theta^{r}(1-\theta)^x d\theta}$$
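That integral is a Beta function, $\int_0^1 \theta^r(1-\theta)^x\,d\theta = B(r+1,x+1)=\frac{\Gamma(r+1)\Gamma(x+1)}{\Gamma(r+x+2)}$, so the ratio has a closed form. A quick check (the example values $x=7$, $r=3$ are mine):

```python
import math

def bayes_ratio(x, r):
    # The integral equals Beta(r+1, x+1) = Gamma(r+1)*Gamma(x+1)/Gamma(r+x+2)
    marginal = math.gamma(r + 1) * math.gamma(x + 1) / math.gamma(r + x + 2)
    return 0.5 ** (x + r) / marginal

# crude midpoint-rule quadrature to confirm the Beta-function identity
x, r, n = 7, 3, 10_000
approx = sum(((i + 0.5) / n) ** r * (1 - (i + 0.5) / n) ** x for i in range(n)) / n
exact = math.gamma(r + 1) * math.gamma(x + 1) / math.gamma(r + x + 2)
print(approx, exact)  # the two values agree closely
```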