Why is the maximum likelihood estimator $y_{\operatorname{max}}$ instead of $y_{\operatorname{min}}$


Use the method of maximum likelihood to estimate the parameter $\theta$ in the uniform pdf,

$f_y(y ;\theta)=\frac{1}\theta$ where $0\leq y\leq\theta$

According to the solution manual, $\theta_e=y_{\operatorname{max}}$; however, the likelihood function is $L(\theta)=\left(\frac{1}{\theta}\right)^n$.

To me, it seems that $\theta_e=y_{\operatorname{min}}$ would maximize $L(\theta)$, since a smaller $\theta$ makes $\frac{1}{\theta}$ larger.


BEST ANSWER

Well, if you think about it, your answer does not make much sense!

Maybe better to see it as follows:

Let $\hat\theta$ be the candidate estimate. Under the model with parameter $\hat\theta$, any data point $y_i > \hat\theta$ has density zero, and a single zero factor makes the entire likelihood zero. So the likelihood is positive only when $\hat\theta \ge y_{\text{max}}$, and on that range $L(\hat\theta)=\hat\theta^{-n}$ is strictly decreasing in $\hat\theta$. The maximum is therefore attained at the smallest admissible value, $\hat\theta = y_{\text{max}}$.
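A quick numerical sketch of this argument, using only the standard library and a hypothetical sample (the data and the true $\theta=5$ are assumptions for illustration): scanning a grid of candidate $\theta$ values, the likelihood is zero below $y_{\text{max}}$ and decreasing above it, so the grid maximizer sits right at $y_{\text{max}}$.

```python
import random

random.seed(0)
# hypothetical sample from Uniform(0, 5); the true theta = 5.0 is an assumption
y = [random.uniform(0, 5.0) for _ in range(20)]

def likelihood(theta, y):
    # product of uniform densities: any y_i > theta contributes a factor 0,
    # otherwise every factor is 1/theta, giving theta ** (-n)
    if theta <= 0 or max(y) > theta:
        return 0.0
    return theta ** (-len(y))

# scan candidate thetas from 0.01 to 10.00
grid = [0.01 * k for k in range(1, 1001)]
best = max(grid, key=lambda t: likelihood(t, y))

# best is the smallest grid point >= max(y), i.e. the sample maximum
print(best, max(y))
```

Since the likelihood is strictly decreasing for $\theta \ge y_{\text{max}}$, the grid maximizer always lands on the first grid point at or above the sample maximum, never at $y_{\text{min}}$.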


If $\theta$ is less than any of your data points, then the value of your product likelihood will be $0$, because: $$f_y(y;\theta) = \left\{\begin{array}{ll} \frac{1}{\theta} & \mathrm{\ for\ } 0 \le y \le \theta \\ 0 & \mathrm{\ otherwise.} \end{array}\right.$$ So, for a given value of $\theta$, the product likelihood will be $\theta^{-m}\, 0^{N-m}$, where $N$ is the number of data points and $m$ is the number of data points $y_i$ that satisfy $y_i \le \theta$. This is zero unless $m = N$, i.e. unless $\theta \ge y_{\operatorname{max}}$; and for such $\theta$ the likelihood $\theta^{-N}$ is decreasing, so it is maximized at $\theta = y_{\operatorname{max}}$.
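The piecewise density and the product $\theta^{-m}\,0^{N-m}$ can be checked directly with a small sketch (the three data values are hypothetical):

```python
def density(y, theta):
    # uniform pdf on [0, theta]: 1/theta inside the interval, 0 outside
    return 1.0 / theta if 0 <= y <= theta else 0.0

def product_likelihood(theta, ys):
    # multiply the individual densities; one zero factor zeroes the whole product
    L = 1.0
    for y in ys:
        L *= density(y, theta)
    return L

ys = [0.8, 2.1, 3.4]  # hypothetical data, so y_max = 3.4

# theta = 2.5 < y_max: the factor for 3.4 is 0, so the product is 0
print(product_likelihood(2.5, ys))
# theta = y_max: all N factors are 1/theta, giving theta ** (-N)
print(product_likelihood(3.4, ys))
# theta = 5.0 > y_max: still positive, but smaller, since theta ** (-N) decreases
print(product_likelihood(5.0, ys))
```

The printed values trace the shape of the likelihood: zero below $y_{\operatorname{max}}$, then strictly decreasing, with its peak exactly at $\theta = y_{\operatorname{max}}$.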