For $n \geq 2$, let $X_1,X_2,\ldots,X_n$ be independent samples from $P_{\theta}$, the uniform distribution $U(\theta,\theta+1)$, $\theta \in \mathbb R$.


For $n \geq 2$, let $X_1,X_2,\ldots,X_n$ be independent samples from $P_{\theta}$, the uniform distribution $U(\theta,\theta +1),\theta \in \mathbb R$. Let $X_{(1)},X_{(2)},\ldots,X_{(n)}$ be order statistics of the sample.

(a) Show that $(X_{(1)},X_{(n)})$ is a sufficient statistic for $\theta$.

Thoughts: By factorization theorem, this one is clear.
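For concreteness, here is a sketch of that factorization (under the convention that the density is $1$ on the open interval $(\theta,\theta+1)$):

$$f(x_1,\ldots,x_n \mid \theta) = \prod_{i=1}^n \mathbb 1(\theta < x_i < \theta+1) = \underbrace{\mathbb 1(x_{(1)} > \theta)\,\mathbb 1(x_{(n)} < \theta+1)}_{g_\theta(x_{(1)},\,x_{(n)})} \cdot \underbrace{1}_{h(x)},$$

which depends on the data only through $(x_{(1)}, x_{(n)})$, so $(X_{(1)}, X_{(n)})$ is sufficient by the factorization theorem.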

(b) Is $(X_{(1)},X_{(n)})$ complete?

Thoughts: I tried to use the definition of being complete here, but I don't know how to deal with the two dimensions here, which troubles me.

(c) Find $a_n$ and $b(\theta)$ such that $a_n (b(\theta)-X_{(n)}) \rightarrow Z$ in distribution, where $Z$ has an exponential distribution with density $f(x)=e^{-x}$, $x>0$.

(d) What is the MLE of $\theta$ given the sample?

Thoughts: I got the likelihood function: $\prod_{i=1}^{n} x_{i} \mathbb 1_{\theta < X_{(1)} \leq X_{(n)} < \theta +1}$ and the MLE is $X_{(n)}-1<\hat \theta <X_{(1)}$. I'm not sure whether this is right.


There are 3 best solutions below


That $(X_{(1)},X_{(n)})$ is not complete follows from the fact that the expected value of the range $X_{(n)} - X_{(1)}$ is $\frac{n-1}{n+1}$ for every $\theta$. Thus $g(X_{(1)},X_{(n)}) = X_{(n)} - X_{(1)} - \frac{n-1}{n+1}$ is a nonzero function of the statistic whose expectation is $0$ for all $\theta$, which contradicts completeness.
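A quick Monte Carlo sketch (illustrative only; the helper name `mean_range` is made up) confirming that the mean of the range is $(n-1)/(n+1)$ regardless of $\theta$:

```python
import random

def mean_range(theta, n=5, reps=200_000, seed=0):
    """Monte Carlo estimate of E[X_(n) - X_(1)] for a U(theta, theta+1) sample."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xs = [theta + rng.random() for _ in range(n)]
        total += max(xs) - min(xs)
    return total / reps

# The exact value is (n-1)/(n+1) = 2/3 for n = 5, for every theta.
print(mean_range(0.0), mean_range(10.0))
```

Both calls return roughly $2/3$, independently of the shift $\theta$, which is exactly why subtracting $\frac{n-1}{n+1}$ yields an unbiased estimator of $0$.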

And you should say that $X_1,X_2,\ldots,X_n$ is a sample, not that $X_1,X_2,\ldots,X_n$ "are samples".



Regarding the MLE:

First, your likelihood is wrong. It should be $$\mathcal L (\theta \mid \boldsymbol x) = \prod_{i=1}^n \mathbb{1}(\theta \le x_i \le \theta+1) = \mathbb 1(\theta \le x_{(1)}) \mathbb 1 (\theta \ge x_{(n)} - 1).$$ Second, you can see from this expression that $\mathcal L$ is a step function that takes on either the value $0$ or $1$, so any $\hat \theta$ that maximizes the likelihood will satisfy $\mathcal L(\hat\theta \mid \boldsymbol x) = 1$. Consequently, any $\hat \theta$ that simultaneously satisfies $$x_{(n)} - 1 \le \hat \theta \le x_{(1)}$$ will be a maximum likelihood estimator for $\theta$.

So, for example, suppose we observe the sample $$\boldsymbol x = (3.2, 2.9, 3.3, 2.7, 3.6).$$ Then $x_{(1)} = 2.7$ and $x_{(n)} = 3.6$; this sample is consistent with the model, since the spread $x_{(n)} - x_{(1)} = 0.9$ does not exceed $1$. An MLE for the parameter that generated this sample is therefore any $\hat \theta \in [2.6, 2.7]$: every value in this interval attains the same maximal likelihood.
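This computation can be sketched in a few lines (the helper `mle_interval` is hypothetical, not from the answer):

```python
def mle_interval(xs):
    """Interval [x_(n) - 1, x_(1)] of maximum-likelihood estimates of theta
    for a sample assumed to come from U(theta, theta + 1); returns None when
    the sample spread exceeds 1, i.e. the model cannot have produced it."""
    lo, hi = max(xs) - 1, min(xs)
    return (lo, hi) if lo <= hi else None

print(mle_interval([3.2, 2.9, 3.3, 2.7, 3.6]))  # approximately (2.6, 2.7)
```

Note the estimator is set-valued: the likelihood is flat at its maximum, so no single $\hat\theta$ is distinguished.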


Regarding (c): Note that $F_{X_{(n)}}(t) = (t-\theta)^n$, for $t$ in $(\theta, \theta+1]$.

Let $Y_n = a_n(b(\theta) - X_{(n)}) $ hence, \begin{align} F_{Y_n}(y) &= P(a_n(b(\theta) - X_{(n)}) \le y)\\ &=P(X_{(n)}> b(\theta)-y/a_n)\\ &=1-P(X_{(n)}< b(\theta)-y/a_n)\\ &=1-F_{X_{(n)}}(b(\theta)-y/a_n)\\ &= 1- (b(\theta)-y/a_n - \theta)^n, \end{align} thus if $(a_n)_{n\in\mathbb{N}} = (n)_{n\in\mathbb{N}}$ and $b(\theta) = \theta +1$, then $$ \lim_{n\to\infty}F_{Y_n}(y) = \lim_{n\to\infty} (1- (1-y/n)^n) = 1 - e^{-y},\,\, y>0 . $$
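A simulation sketch of this limit (assumptions not in the answer: $\theta = 2$, and $X_{(n)}$ is drawn directly by inverse CDF, $X_{(n)} = \theta + U^{1/n}$ with $U \sim U(0,1)$, which follows from $F_{X_{(n)}}(t) = (t-\theta)^n$):

```python
import math
import random

def empirical_cdf_at(y, theta=2.0, n=500, reps=100_000, seed=1):
    """Empirical P(Y_n <= y) for Y_n = n*(theta + 1 - X_(n)), drawing X_(n)
    by inverse CDF: X_(n) = theta + U**(1/n), since F_{X_(n)}(t) = (t-theta)^n."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x_max = theta + rng.random() ** (1.0 / n)
        if n * ((theta + 1) - x_max) <= y:
            hits += 1
    return hits / reps

# Should be close to the Exp(1) CDF, 1 - e^{-1} ≈ 0.632, at y = 1.
print(empirical_cdf_at(1.0), 1 - math.exp(-1))
```

The empirical CDF at $y=1$ lands near $1 - e^{-1}$, matching the limit $1-(1-y/n)^n \to 1-e^{-y}$ derived above.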