MP test construction for shifted exponential distribution


For the pdf $f_{\theta}(x)=e^{-(x-\theta)} , x \ge \theta$, find a most powerful test of size $\alpha$, using Neyman Pearson Lemma to test $\theta=\theta_{0}$ against $\theta=\theta_1(> \theta_0)$, based on a sample of size $n$.

I am facing difficulty because the support here depends on the parameter. If $X_{(1)}>\theta_1$, then $f_1(x)>\lambda f_0(x)$ whenever $e^{n(\theta_1- \theta_0)}> \lambda$, which would mean rejecting the null hypothesis. But how do I make this a size-$\alpha$ test? The likelihood ratio comes out constant. Please help!


BEST ANSWER

Joint density of the sample $(X_1,X_2,\ldots,X_n)$ is

$$f_{\theta}(x_1,\ldots,x_n)=\exp\left(-\sum_{i=1}^n(x_i-\theta)\right)\mathbf1_{x_{(1)}>\theta}\quad,\,\theta\in\mathbb R$$

By N-P lemma, a most powerful test of size $\alpha$ for testing $H_0:\theta=\theta_0$ against $H_1:\theta=\theta_1(>\theta_0)$ is given by $$\varphi(x_1,\ldots,x_n)=\begin{cases}1&,\text{ if }\lambda(x_1,\ldots,x_n)>k\\0&,\text{ if }\lambda(x_1,\ldots,x_n)<k\end{cases}$$

, where $$\lambda(x_1,\ldots,x_n)=\frac{f_{\theta_1}(x_1,\ldots,x_n)}{f_{\theta_0}(x_1,\ldots,x_n)}$$

and $k(>0)$ is such that $$E_{\theta_0}\varphi(X_1,\ldots,X_n)=\alpha$$

Now,

\begin{align} \lambda(x_1,\ldots,x_n)&=\frac{\exp\left(-\sum_{i=1}^n(x_i-\theta_1)\right)\mathbf1_{x_{(1)}>\theta_1}}{\exp\left(-\sum_{i=1}^n(x_i-\theta_0)\right)\mathbf1_{x_{(1)}>\theta_0}} \\\\&=e^{n(\theta_1-\theta_0)}\frac{\mathbf1_{x_{(1)}>\theta_1}}{\mathbf1_{x_{(1)}>\theta_0}} \\\\&=\begin{cases}e^{n(\theta_1-\theta_0)}&,\text{ if }x_{(1)}>\theta_1\\0&,\text{ if }\theta_0<x_{(1)}\le \theta_1\end{cases} \end{align}
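The case split above can be checked numerically. Here is a minimal Python sketch of the likelihood ratio as a function of the sample; the helper name `likelihood_ratio` is mine, not from the answer:

```python
import math

def likelihood_ratio(xs, theta0, theta1):
    """Likelihood ratio f_{theta1}(xs) / f_{theta0}(xs) for a sample from
    the shifted exponential; depends on the data only through min(xs)."""
    x_min = min(xs)
    n = len(xs)
    if x_min <= theta0:
        raise ValueError("sample impossible under H0")
    if x_min > theta1:
        # both densities positive: the ratio is the constant exp(n(theta1-theta0))
        return math.exp(n * (theta1 - theta0))
    # theta0 < x_min <= theta1: numerator density is 0
    return 0.0
```

Note the ratio jumps from $0$ to $e^{n(\theta_1-\theta_0)}$ as $x_{(1)}$ crosses $\theta_1$, which is exactly the non-decreasing behaviour used below.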

So $\lambda(x_1,\ldots,x_n)$ is a monotone non-decreasing function of $x_{(1)}$, which means

$$\lambda(x_1,\ldots,x_n)\gtrless k \iff x_{(1)}\gtrless c$$, for some $c$ such that $$E_{\theta_0}\varphi(X_1,\ldots,X_n)=\alpha$$

We thus have

$$\varphi(x_1,\ldots,x_n)=\begin{cases}1&,\text{ if }x_{(1)}>c\\0&,\text{ if }x_{(1)}<c\end{cases}$$

Again,

\begin{align} E_{\theta_0}\varphi(X_1,\ldots,X_n)&=P_{\theta_0}(X_{(1)}>c) \\&=\left(P_{\theta_0}(X_1>c)\right)^n \\&=e^{n(\theta_0-c)}\quad,\,c>\theta_0 \end{align}

So from the size condition we get $$c=\theta_0-\frac{\ln\alpha}{n}$$

Finally, the test function is

$$\varphi(x_1,\ldots,x_n)=\begin{cases}1&,\text{ if }x_{(1)}>\theta_0-\frac{\ln\alpha}{n}\\0&,\text{ if }x_{(1)}<\theta_0-\frac{\ln\alpha}{n}\end{cases}$$
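As a sanity check, the cutoff $c=\theta_0-\frac{\ln\alpha}{n}$ and the size condition can be verified by simulation. A small Python sketch (function names are mine; under $H_0$, each $X_i-\theta_0$ is standard exponential):

```python
import math
import random

def critical_value(theta0, n, alpha):
    # c = theta0 - ln(alpha)/n, from e^{n(theta0 - c)} = alpha
    return theta0 - math.log(alpha) / n

def size_by_simulation(theta0, n, alpha, reps=200_000, seed=0):
    """Monte Carlo estimate of P_{theta0}(X_(1) > c) for the test above."""
    rng = random.Random(seed)
    c = critical_value(theta0, n, alpha)
    hits = 0
    for _ in range(reps):
        # X_i = theta0 + standard exponential under H0
        x_min = min(theta0 + rng.expovariate(1.0) for _ in range(n))
        hits += x_min > c
    return hits / reps
```

For example, `size_by_simulation(1.0, 5, 0.05)` should return a value close to $0.05$, since the exact size is $e^{n(\theta_0-c)}=\alpha$.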

ANSWER

Comment: This is a tricky problem--pretty much for the reason you mention.

It may help to consider the case $n = 1$ for $\theta_0 = 1,\,\theta_1 = 5.$ Then plots of the PDF are shown below. Suppose we agree to Reject $H_0: \theta = 1$ against $H_a: \theta= 5$ when the single observation (also the smallest) $X > 5,$ otherwise fail to reject. Then it is easy to see that the significance level of the test is $\alpha = e^{-(5-1)} = e^{-4} \approx 0.018.$

Can you write the LR in this case? When you understand the problem for $n = 1,$ then go on to the general case.
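The $n=1$ level above is a one-line computation, since $P_{\theta_0}(X>c)=e^{\theta_0-c}$ for $c>\theta_0$ (a quick Python check):

```python
import math

# Size of the n = 1 test "reject when X > 5" under H0: theta = 1.
# P_{theta0}(X > c) = exp(theta0 - c) for c > theta0.
theta0, c = 1.0, 5.0
alpha = math.exp(theta0 - c)  # e^{-4}, about 0.018
```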

[Plots of the two pdfs, for $\theta = 1$ and $\theta = 5$]

ANSWER

If $X_{(1)} \in (\theta_0, \theta_1)$, there is no uncertainty: such a sample is impossible under $H_1$, so you can be sure $H_0$ is right. If $X_{(1)} \ge \theta_1$, the likelihood ratio is $$ \frac {\exp\{n \theta_1 - \sum x_i \}} {\exp\{n \theta_0 - \sum x_i \} } = \exp\{ n(\theta_1 - \theta_0) \}, $$ which by itself is not helpful, as it is constant for every sample. However, note that the LR is a monotone non-decreasing function of $x_{(1)}$, so the MP test rejects for large values of $X_{(1)}$. Using the fact that under $H_0$ the minimum satisfies $X_{(1)} - \theta_0 \sim \mathcal{E}xp(n)$, the size condition is $$ \alpha = \mathbb{E}_{\theta_0}I\{X_{(1)} > c \}=\mathbb{P}_{\theta_0}(X_{(1)} > c) = \exp\{n(\theta_0 - c)\}, $$ i.e., the MP test is $$ I\left\{X_{(1)} >\theta_0-\frac{\ln \alpha}{n}\right\} \, . $$