MLE of uniform distribution again


I've struggled for hours with a seemingly simple problem: I'm supposed to compute the MLE for $\theta$.

We have observations $(y_1, y_2, \ldots, y_n)$ from a uniform distribution.

The density function is as follows:

$$ f(y|\theta)=\left\{ \begin{array}{lcl} \frac{1}{2\theta +1} &,& 0 \leq y \leq 2 \theta +1 \\ 0 &, & \mbox{otherwise} \end{array} \right. $$

Anyway, I guess the likelihood function must be

$$\frac{1}{(2\theta +1)^n}=(2\theta +1)^{-n}=\left(e^{\ln(2\theta + 1)}\right)^{-n}=e^{-n\ln(2\theta + 1)}$$

since none of the factors depend on the index $i$, the product introduces no summations.

If we take the loglikelihood we're left with

$$-n\ln(2\theta + 1)$$

If we take the derivative of this, that should equal:

$$\frac{-n}{2\theta + 1}\cdot 2=\frac{-2n}{2\theta + 1}$$

I multiply by two since that's the inner derivative (chain rule).

$$\frac{-2n}{(2\theta + 1)}=0$$
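As a quick numeric sanity check (a minimal sketch, with an arbitrary sample size chosen purely for illustration), the derivative above is strictly negative for every $\theta \geq 0$, so the equation has no solution:

```python
# Sketch: the derivative -2n/(2*theta + 1) is negative for all theta >= 0,
# so the log-likelihood has no interior stationary point.
n = 10  # arbitrary sample size, for illustration only
derivatives = [-2 * n / (2 * theta + 1) for theta in [0.0, 0.5, 1.0, 5.0, 100.0]]
print(derivatives)  # every value printed is negative
```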

At this point I'm really uncertain. I would want another term, so I could isolate $\theta$ on one side.

My book tells me I should add $Y_n$ in here somewhere, though I can't really follow their reasoning. They want $Y_n$ to equal $\hat \theta$. Since both $\theta$ and $Y_n$ are in the answer, I guess that can't be right.

I try to set $Y_n$ equal to the expression (feeling I'm walking out on a limb here).

If

$$\frac{-2n}{(2\theta + 1)}=Y_n$$

then:

$$\frac{-2n}{Y_n}=2\theta + 1$$

and

$$\frac{-n}{Y_n}-\frac12=\theta$$

This is (surprise!) not correct; the correct answer is that

$$\hat \theta=\frac12(Y_n-1)$$

Please help me sort this out.

/Magnus


When the parameter you are trying to find the MLE for appears in the constraint describing the support of your pdf, this "no information from setting the derivative equal to $0$" situation will almost always happen. When it does, realize that even though the derivative failed you, your goal is still the same: to maximize the likelihood. So, look at it. Your likelihood is $$ L(\theta) = \frac{1}{(2 \theta + 1)^{n}}. $$ It is easy to see that this is a decreasing function of $\theta$. So, to make it as large as possible, you need to take $\theta$ as small as possible. Note that all of your $Y$'s must be less than or equal to $2 \theta + 1$; equivalently, $2 \theta + 1$ must lie above all of the $Y$'s. In particular, the smallest admissible value for $2\theta + 1$ is the maximum order statistic $Y_{(n)}$.

So, set $2 \theta + 1 = Y_{(n)}$ to get the MLE $$ \hat{\theta} = \frac{1}{2} \left(Y_{(n)}-1\right). $$
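A small simulation (a sketch, with the true $\theta$ and sample size chosen arbitrarily for illustration) confirms this estimator: drawing from $\mathrm{Uniform}(0,\, 2\theta+1)$ and plugging the sample maximum into $\hat{\theta} = \frac{1}{2}(Y_{(n)}-1)$ recovers a value just below the true $\theta$.

```python
import random

# Sketch: simulate the MLE theta_hat = (max(Y) - 1)/2 for Uniform(0, 2*theta + 1).
random.seed(0)
theta = 1.5                    # true parameter (illustrative), so the support is [0, 4]
n = 100_000
ys = [random.uniform(0, 2 * theta + 1) for _ in range(n)]
theta_hat = (max(ys) - 1) / 2  # the MLE from the answer above
print(theta_hat)               # close to the true value 1.5, and never above it
```

Note that $\hat{\theta} \leq \theta$ always holds, since every observation lies below the true upper endpoint $2\theta + 1$; the estimator approaches $\theta$ from below as $n$ grows.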