Problem with the efficiency of two test statistics in the 2-sample case


I'm currently reading a paper about asymptotic relative efficiency, *Asymptotic Normality and Efficiency of Certain Nonparametric Test Statistics* by Herman Chernoff and I. Richard Savage.

I am stuck trying to understand Theorem 3 (page 980). The setting of the theorem is:

Suppose we have $X_1,X_2,\dots,X_n \sim F(x)$ i.i.d. and $Y_1,\dots,Y_m \sim G(x)=F(x+\theta)$ i.i.d., with $\theta \geq 0$ and $\operatorname{Var}(X_1)=\operatorname{Var}(Y_1)=\sigma^2$. We want to test the location hypothesis $H_0 : \theta = 0$ against $\theta > 0$. Consider the two-sample t-statistic $$T_1 = \frac{\bar{X}-\bar{Y}}{\sqrt{\big(\frac{1}{n}+\frac{1}{m}\big)\hat{\sigma}^2}}$$ and the nonparametric van der Waerden (normal-scores) test statistic $$T_2 = \sum_{i=1}^n \Phi^{-1}\Big(\frac{R_i}{N+1}\Big),$$ where $R_i$ is the rank of $X_i$ in the pooled sample of size $N=n+m$.
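For concreteness, here is a small numerical sketch of the two statistics on simulated data (my own illustration, not from the paper; the variable names and the seed are arbitrary assumptions):

```python
import numpy as np
from scipy import stats

# Simulated data under H0 (theta = 0)
rng = np.random.default_rng(42)
n, m = 50, 60
x = rng.normal(size=n)   # X_1, ..., X_n ~ F
y = rng.normal(size=m)   # Y_1, ..., Y_m ~ G = F under H0

# Two-sample t-statistic with pooled variance estimate sigma-hat^2
pooled = ((n - 1) * x.var(ddof=1) + (m - 1) * y.var(ddof=1)) / (n + m - 2)
t1 = (x.mean() - y.mean()) / np.sqrt((1 / n + 1 / m) * pooled)

# van der Waerden statistic: R_i = rank of X_i in the pooled sample, N = n + m
N = n + m
r = stats.rankdata(np.concatenate([x, y]))[:n]   # ranks of the X's only
t2 = np.sum(stats.norm.ppf(r / (N + 1)))
print(t1, t2)
```

Under $H_0$ both statistics are (asymptotically) centered at zero; the point of the theorem is how their powers compare as $n, m \to \infty$.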

I want to check the asymptotic efficiency of $T_2$ with respect to $T_1$.

From the paper I get the following expression for the asymptotic efficiency of $T_2$ w.r.t. $T_1$:

$$ef(T_2,T_1)= \sigma^2 \bigg(\int_{\mathbb{R}}\frac{f^2(x)}{\phi(\Phi^{-1}(F(x)))}dx \bigg)^2$$
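This expression can be evaluated numerically by quadrature. The sketch below (my own check, with integration limits chosen so that $F(x)$ stays strictly inside $(0,1)$ in floating point) confirms the theorem's claim on two examples: a standard normal $f$ gives exactly $1$, and a logistic $f$ (variance $\pi^2/3$) gives $\pi/3 \approx 1.047 > 1$:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def efficiency(pdf, cdf, sigma2, lo, hi):
    # ef(T2, T1) = sigma^2 * ( integral of f(x)^2 / phi(Phi^{-1}(F(x))) dx )^2
    integrand = lambda x: pdf(x) ** 2 / stats.norm.pdf(stats.norm.ppf(cdf(x)))
    val, _ = quad(integrand, lo, hi)
    return sigma2 * val ** 2

# Normal f with sigma^2 = 1: the equality case of Theorem 3, ef = 1
eff_normal = efficiency(stats.norm.pdf, stats.norm.cdf, 1.0, -8, 8)

# Logistic f with variance pi^2/3: ef works out to pi/3 ≈ 1.047 > 1
eff_logistic = efficiency(stats.logistic.pdf, stats.logistic.cdf,
                          np.pi ** 2 / 3, -30, 30)
print(eff_normal, eff_logistic)
```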

Theorem 3 says that $ef(T_2,T_1) \geq 1$, with equality holding iff $f$ is the $\mathcal{N}(0,\sigma^2)$ density. The proof in the paper is a bit difficult for me to follow, so I tried a different approach using Jensen's inequality (applying Jensen to the convex function $t \mapsto 1/t$ under the probability measure $f(x)\,dx$):
$$ \bigg(\int_{\mathbb{R}}\frac{f^2(x)}{\phi(\Phi^{-1}(F(x)))}\,dx \bigg)^2 \geq \bigg(\frac{1}{\int_{\mathbb{R}} \phi(\Phi^{-1}(F(x)))\,dx}\bigg)^2. $$
I'm stuck here. Any kind of help is appreciated; an accessible explanation of the paper's proof would also be fine for me.
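At least the equality case of the theorem can be verified by direct substitution (this is only a consistency check, not the inequality proof): if $f$ is the $\mathcal{N}(0,\sigma^2)$ density, then $F(x)=\Phi(x/\sigma)$, so
$$\Phi^{-1}(F(x)) = \frac{x}{\sigma}, \qquad \phi\big(\Phi^{-1}(F(x))\big) = \phi\Big(\frac{x}{\sigma}\Big) = \sigma f(x),$$
and hence
$$\int_{\mathbb{R}} \frac{f^2(x)}{\phi(\Phi^{-1}(F(x)))}\,dx = \int_{\mathbb{R}} \frac{f(x)}{\sigma}\,dx = \frac{1}{\sigma}, \qquad ef(T_2,T_1) = \sigma^2\cdot\frac{1}{\sigma^2} = 1.$$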

Thanks in advance