convergence in distribution of lower bound of uniform distribution - maximum likelihood


Let $x_1, \ldots, x_n$ be a random sample from $U(\beta_1,\beta_2)$. It can be shown that the maximum likelihood estimators are $\hat{\beta}_1=\min(x_1,\ldots,x_n)$ and $\hat{\beta}_2=\max(x_1,\ldots,x_n)$.

Now I want to show the following statement (which was given by my professor):

$n(\hat{\beta}_1-\beta_1) \xrightarrow{d}$ the distribution with density $\frac{1}{\beta_2-\beta_1}e^{-x/(\beta_2-\beta_1)}$ for $x \ge 0$, i.e. the Exponential distribution with rate $\frac{1}{\beta_2-\beta_1}$.

I am totally stuck on this one. I only notice that the CDF of $\hat{\beta}_1=\min(x_1,\ldots,x_n)$ is $1-(1-F(x))^{n}$. I was also thinking about using $\lim_{n\rightarrow \infty} \left(1-\frac{x}{n}\right)^n = e^{-x}$. I don't know whether I am on the right track.

Another question: why do we get a convergence rate of $n$ rather than $\sqrt{n}$?




Yes, you are on the right track. For fixed $x \ge 0$ and $n$ large enough that $\beta_1 + x/n \le \beta_2$, $$P(n(\hat{\beta}_1 - \beta_1) > x) = [P(X_1 > \beta_1 + x/n)]^n = \left(\frac{\beta_2 - (\beta_1 + x/n)}{\beta_2 - \beta_1}\right)^n = \left(1 - \frac{x/(\beta_2 - \beta_1)}{n}\right)^n \to e^{-x/(\beta_2 - \beta_1)}.$$ From here you can obtain the CDF or PDF of $n(\hat{\beta}_1 - \beta_1)$ to identify the limiting distribution.
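Carrying out that last step: the limit above is a survival function, so the limiting CDF and PDF are

$$\lim_{n\to\infty} P(n(\hat{\beta}_1 - \beta_1) \le x) = 1 - e^{-x/(\beta_2-\beta_1)}, \qquad f(x) = \frac{1}{\beta_2-\beta_1}\,e^{-x/(\beta_2-\beta_1)}, \quad x \ge 0,$$

which is the Exponential distribution with rate $1/(\beta_2-\beta_1)$, i.e. mean $\beta_2-\beta_1$.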

The "$n$" here shows that the convergence is much faster than in regular settings, where the central limit theorem would typically give a "$\sqrt{n}$" rate.
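Not part of the derivation above, but a quick Monte Carlo sketch (with arbitrarily chosen $\beta_1 = 2$, $\beta_2 = 5$, so $\beta_2 - \beta_1 = 3$) that illustrates the exponential limit of $n(\hat{\beta}_1 - \beta_1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
beta1, beta2 = 2.0, 5.0   # true parameters (arbitrary choice for the demo)
n, reps = 500, 5000       # sample size and number of Monte Carlo replications

# Each row is one sample of size n from U(beta1, beta2);
# scaled_min collects n * (min - beta1) across replications.
samples = rng.uniform(beta1, beta2, size=(reps, n))
scaled_min = n * (samples.min(axis=1) - beta1)

# Limiting law: Exponential with mean beta2 - beta1 = 3, so
# P(n(min - beta1) > x) is approximately exp(-x / (beta2 - beta1)).
print(scaled_min.mean())          # close to 3
print(np.mean(scaled_min > 3.0))  # close to exp(-1), about 0.368
```

The empirical mean of the scaled minimum and the empirical tail probability at $x = 3$ should match the exponential limit up to Monte Carlo error.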