Asymptotic test using a moment estimator and a Gaussian approximation for a sample from $U[0,\theta]$


Consider $(X_1, \ldots, X_n)$ an $n$-sample from the uniform distribution $U[0, \theta]$, where $\theta > 0$ is an unknown parameter, and let $\theta_0 > 0$ be fixed. One wishes to test $H_0:\theta=\theta_0$ against $H_1:\theta \neq \theta_0$.

I need to propose an asymptotic test of level $\alpha$ using the estimator $\widehat{\theta}_n =2\bar{X}$ and a Gaussian approximation.

From the Central Limit Theorem,

\begin{equation*} \sqrt{n}\left(2\bar{X} - \theta\right)\xrightarrow[n\rightarrow\infty]{(l)}\mathcal{N}\left(0, \dfrac{\theta^2}{3} \right) \end{equation*} (the variance follows from $\operatorname{Var}(X_1)=\theta^2/12$, so $\operatorname{Var}(2\bar{X})=4\theta^2/(12n)=\theta^2/(3n)$). And for the maximum-likelihood estimator $\tilde{\theta}_n=\max_{1\le i\le n}X_i$ we have

$$\sqrt{n}(\tilde{\theta}_n-\theta)\xrightarrow[n\rightarrow\infty]{(l)} 0$$
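This second limit can be checked directly from the distribution of the maximum, a standard computation for the uniform model: for $x > 0$,

$$\mathbb{P}\big(n(\theta-\tilde{\theta}_n) > x\big)=\mathbb{P}\Big(\tilde{\theta}_n < \theta - \tfrac{x}{n}\Big)=\Big(1-\tfrac{x}{n\theta}\Big)^n\xrightarrow[n\rightarrow\infty]{}e^{-x/\theta},$$

so $n(\theta-\tilde{\theta}_n)$ converges in law to an exponential distribution, and in particular $\sqrt{n}(\tilde{\theta}_n-\theta)=\tfrac{1}{\sqrt{n}}\cdot n(\tilde{\theta}_n-\theta)\xrightarrow[n\rightarrow\infty]{(l)}0.$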

(see this related question: Weak Convergence with Uniform Distribution $U[0;\theta]$ and Method of Moments).

We know that

Theorem (Wilks): Consider a regular parametric model $(E,\mathcal{E},\mathbb{P}_{\theta},\theta \in \Theta)$, $\Theta\subseteq\mathbb{R}^p$, dominated by some probability measure $\mu$, with likelihood $\mathcal{L}(x,\theta)$. Consider also the test problem $H_0:\theta=\theta_0$ against $H_1:\theta\neq\theta_0$. Then, with the likelihood ratio $$\Lambda_n=\frac{\mathcal{L}(X_1,\ldots,X_n,\theta_0)}{\sup_{\theta\in\Theta}\mathcal{L}(X_1,\ldots,X_n,\theta)},$$ one has $$-2\ln\Lambda_n\xrightarrow[n\rightarrow\infty]{(l)}\chi^2(p)$$

The rejection region of this asymptotic test is then $W=\lbrace -2\ln\Lambda_n > q_{1-\alpha}\rbrace$, where $q_{1-\alpha}$ is the $1-\alpha$ quantile of the $\chi^2(p)$ distribution.

In this case we have $H_0:\theta= \dfrac{\theta^2}{3}$ against $H_1:\theta \neq \dfrac{\theta^2}{3}$.

Is the assumption above correct?

How can I apply the theorem of Wilks if $\sqrt{n}(\tilde{\theta}_n-\theta)\xrightarrow[n\rightarrow\infty]{(l)} 0$?


The MME $T = 2\bar X$ is not a good estimator of $\theta$ for random observations $X_1, X_2, \dots, X_n$ from $\mathsf{Unif}(0, \theta).$ Nevertheless, it can be used to test $H_0: \theta = \theta_0$ against $H_a: \theta \ne \theta_0,$ at level 5%---even if not with optimal power.

The null distribution of the test statistic $Z = \frac{T - \theta_0}{\theta_0/\sqrt{3n}}$ is very nearly standard normal for $n$ larger than about 10. Thus for a 5% level test, one rejects for $|Z| > 1.96.$
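As a minimal sketch of this procedure (the function name `mme_test` and the default `alpha = 0.05` are illustrative choices, not part of the original answer), the test can be carried out as follows:

```r
# Two-sided asymptotic test of H0: theta = theta0 based on the
# moment estimator T = 2 * mean(x); under H0, Z is approximately N(0,1).
mme_test <- function(x, theta0, alpha = 0.05) {
  n <- length(x)
  t_hat <- 2 * mean(x)                        # moment estimator of theta
  z <- (t_hat - theta0) / (theta0 / sqrt(3 * n))
  crit <- qnorm(1 - alpha / 2)                # 1.96 for alpha = 0.05
  list(statistic = z, reject = abs(z) > crit)
}

set.seed(1)
res <- mme_test(runif(50, 0, 10), theta0 = 10)
```

For data actually drawn from $\mathsf{Unif}(0,10)$, the test rejects a hypothesised $\theta_0$ far from 10 with probability close to one.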

The only possibly contentious point is the near normality of $Z$. The brief simulation below illustrates approximate normality for $n = 12.$

m = 10^6;  n = 12;  th = 10
x = runif(n*m, 0, th)
DTA = matrix(x, nrow=m)              # each row a sample of size n
a = rowMeans(DTA)
t = 2*a                              # moment estimator for each sample
z = (t - th)/(th/sqrt(3*n))          # standardized test statistic
hist(z, prob=TRUE, col="skyblue2", ylim=c(0,.4))
curve(dnorm(x), lwd=2, col="red", add=TRUE)
shapiro.test(z[1:2000])  # test the first 2000 values of z for normality

        Shapiro-Wilk normality test

data:  z[1:2000] 
W = 0.9995, p-value = 0.8788

The histogram below shows a million values of $Z$ (based on $n = 12$ and $\theta = 10$) along with the standard normal density.


Notes: (a) The CLT converges rapidly for IID uniform observations. For some years, $Z = U_1 + U_2 + \cdots + U_{12}-6,$ where $U_i \stackrel{iid}{\sim} \mathsf{Unif}(0,1),$ was used to simulate standard normal random variables, generally with satisfactory results. (Better methods are available now.) (b) The MLE $\max_i X_i$ is a far better estimator of $\theta$; it has a scaled beta distribution, since $\max_i X_i/\theta \sim \mathsf{Beta}(n,1).$ (c) Because you are not using the MLE, discussion of it is irrelevant to the problem you ask.
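The classical generator mentioned in note (a) can be sketched in a few lines of R (a historical illustration only, not a recommended method; the name `rnorm_clt` is made up here):

```r
# Approximate N(0,1) draws via the sum of 12 Unif(0,1) variables minus 6:
# the sum has mean 12 * 1/2 = 6 and variance 12 * 1/12 = 1 exactly.
rnorm_clt <- function(m) {
  u <- matrix(runif(12 * m), nrow = m)   # m rows of 12 uniforms each
  rowSums(u) - 6
}

z <- rnorm_clt(10^5)
```

The resulting draws have the right mean and variance, but the tails are truncated at $\pm 6$, one reason better methods (e.g. inversion or Box-Muller) replaced it.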