Asymptotic normality of estimator of uniform's distribution parameter


We have $X_1, \dots, X_n \sim U[0, \theta]$ and the estimator $\phi^*(X_{[n]}) = X_{(n)}$, where $X_{(n)} = \max_i X_i$.

I need to make this estimator unbiased and check whether it is asymptotically normal.

Unbiasing is easy: we need the expectation of $\phi^*$. We have $\mathbb{E}\phi^* = \mathbb{E}X_{(n)} = \frac{n}{n+1}\theta$, so the bias is $b(\phi^*, \theta) = \mathbb{E}X_{(n)} - \theta = -\frac{\theta}{n + 1}$. Hence the unbiased estimator is $\tilde{\phi}^* = \frac{n+1}{n}X_{(n)}$.
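(A quick Monte Carlo sanity check of the bias computation; this is just a sketch, and the values of $\theta$, $n$, and the replication count are arbitrary illustrative choices.)

```python
import numpy as np

# Monte Carlo check of E[X_(n)] = n/(n+1) * theta for U[0, theta] samples.
# theta, n, and reps are arbitrary illustrative choices.
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 200_000

x_max = rng.uniform(0, theta, size=(reps, n)).max(axis=1)

print(x_max.mean(), n / (n + 1) * theta)    # sample mean vs. n/(n+1)*theta
print(((n + 1) / n * x_max).mean(), theta)  # corrected estimator averages to theta
```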

Now I need to check whether this unbiased estimator is asymptotically normal, i.e. whether $\sqrt{n}\left(\frac{n+1}{n}X_{(n)} - \theta\right) \to \mathcal{N}(0, \sigma^2(\theta))$. How can I do that? Do I need to use the central limit theorem?



Best answer:

Given that the $X_i$ are iid, it is easy enough to obtain the CDF of the standardized estimator (we assume it is $n^\alpha$-consistent):

$$P\left(n^\alpha \left(\frac{n+1}{n}X_{(n)}-\theta \right)\leq x\right)=P\left(X_{(n)}\leq \frac{n}{n+1}\left(\frac{x}{n^\alpha}+\theta\right)\right)\\ =\frac{1}{\theta^n}\left(\frac{n}{n+1}\left(\frac{x}{n^\alpha}+\theta\right)\right)^n\\ =\left(\left(1+\frac{1}{n}\right)^n\right)^{-1}\left(1+\frac{x}{\theta n^\alpha }\right)^n,$$

and we see when $\alpha=1$ then this converges to a nondegenerate CDF of

$$F(x)=e^{x/\theta-1}$$

on the support $(-\infty,\theta],$ which is not normal.
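(As a sanity check on this limit, one can compare the empirical CDF of the standardized maximum with $F(x)=e^{x/\theta-1}$. The sketch below samples $X_{(n)}$ directly through its CDF $(x/\theta)^n$, i.e. $X_{(n)} = \theta U^{1/n}$ for uniform $U$; the values of $\theta$, $n$, and the replication count are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 1000, 200_000  # arbitrary illustrative choices

# Sample X_(n) via its CDF (x/theta)^n: X_(n) = theta * U^(1/n)
x_max = theta * rng.random(reps) ** (1.0 / n)
t_n = n * ((n + 1) / n * x_max - theta)  # the alpha = 1 standardization

# Empirical CDF of T_n vs the claimed limit F(x) = exp(x/theta - 1)
for x in (-4.0, -2.0, 0.0, 1.0):
    print(f"x={x:5.1f}  empirical={(t_n <= x).mean():.4f}  "
          f"limit={np.exp(x / theta - 1):.4f}")
```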


If the aim is simply to assess whether we have asymptotic normality, we can settle this up front by looking at the support. Since $X_{(n)}\in [0,\theta]$,

$$T_n=n^\alpha \left(\frac{n+1}{n}X_{(n)}-\theta \right)\in \left[-\theta n^\alpha,\frac{1}{n^{1-\alpha}}\theta\right],$$

so that $T_n$ cannot be asymptotically normal for $\alpha\leq 1$.
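(This deterministic bound can be verified numerically; in this sketch $\theta$, $n$, and $\alpha$ are arbitrary illustrative choices.)

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, alpha = 2.0, 50, 0.5  # arbitrary illustrative choices

x_max = rng.uniform(0, theta, size=(100_000, n)).max(axis=1)
t_n = n**alpha * ((n + 1) / n * x_max - theta)

# Every realization lies in [-theta * n^alpha, theta / n^(1 - alpha)]
print(t_n.min() >= -theta * n**alpha)        # True
print(t_n.max() <= theta / n**(1 - alpha))   # True
```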

Second answer:

First of all, your approach to this problem is flawed. To check if a sequence $Y_1,Y_2,\dots,$ of variables is asymptotically normal, you need to show that $$\frac{Y_n-E[Y_n]}{\sqrt{\text{Var}(Y_n)}}\to N(0,1).$$ In your case, $Y_n=\frac{n+1}{n}X_{(n)}$, and $E[Y_n]=\theta$. Since you are multiplying $(Y_n-E[Y_n])$ by $\sqrt{n}$, you seem to be assuming that $\text{Var}(Y_n)\sim 1/n$, but you have not proved this. As a first step, you should compute $\text{Var}(\frac{n+1}{n}X_{(n)})$, and then plug that into the expression above and try to work out what the limit is.

To compute $\text{Var}(\frac{n+1}{n} X_{(n)})$, we first find $$ E[X_{(n)}^2]=\int_0^\theta 2t\,P(X_{(n)}>t)\,dt=\int_0^\theta 2t\left(1-(t/\theta)^n\right)dt=\theta^2\left(1-\frac{2}{n+2}\right), $$ which further implies $$ \text{Var}(X_{(n)})=E[X_{(n)}^2]-E[X_{(n)}]^2=\theta^2\left(1-\tfrac2{n+2}\right)-\theta^2\left(1-\tfrac1{n+1}\right)^2=\frac{\theta^2\cdot n}{(n+1)^2(n+2)}, $$ and finally that $\text{Var}(\frac{n+1}nX_{(n)})=\frac{\theta^2}{n(n+2)}$.

The important point is that the variance goes to zero at rate $1/n^2$, i.e. the standard deviation decreases like $1/n$, so you need to multiply by $n$ (not $\sqrt{n}$) for a nontrivial limiting distribution to exist. Therefore, we need to look at $$ P\left(n\cdot\left(\frac{n+1}nX_{(n)}-\theta\right)\le t\right) $$ and see whether the limit equals $P(W\le t)$ for some normal variable $W$. But now we have reduced the problem to Golden_Ratio's answer with $\alpha=1$. Computing the same limit they did, we see $$ P\left(n\cdot\left(\frac{n+1}nX_{(n)}-\theta\right)\le t\right)=\left(\left(1+\frac{1}{n}\right)^n\right)^{-1}\left(1+\frac{t}{\theta n}\right)^n \longrightarrow e^{t/\theta-1}. $$ Since $e^{t/\theta-1}$ is not the CDF of a normal variable, we conclude that the estimator is not asymptotically normal. Instead, the limiting distribution is that of $\theta - Z$, where $Z$ is exponential with mean $\theta$ (rate $1/\theta$): indeed, $P(\theta - Z\le t)=P(Z\ge \theta - t)=e^{-(\theta-t)/\theta}=e^{t/\theta-1}$.
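(A short simulation confirms both the variance formula and the exponential-type limit; this is only a sketch, with $\theta$, $n$, and the replication count chosen arbitrarily. It samples $X_{(n)}$ directly through its CDF $(x/\theta)^n$, i.e. $X_{(n)} = \theta U^{1/n}$ for uniform $U$.)

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 2.0, 100, 200_000  # arbitrary illustrative choices

# X_(n) has CDF (x/theta)^n on [0, theta], so X_(n) = theta * U^(1/n)
x_max = theta * rng.random(reps) ** (1.0 / n)
y = (n + 1) / n * x_max

# Sample variance should match theta^2 / (n (n + 2))
print(y.var(), theta**2 / (n * (n + 2)))

# n (Y - theta) should be close in distribution to theta - Z,
# where Z is exponential with mean theta (rate 1/theta)
t_n = n * (y - theta)
z = rng.exponential(scale=theta, size=reps)
print(np.quantile(t_n, [0.25, 0.5, 0.75]))
print(np.quantile(theta - z, [0.25, 0.5, 0.75]))
```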