Let $X_{1}, ..., X_{n}$ represent a random sample from $U(0, \theta)$
How do I find the MLE of $\theta^2$, and is this estimator unbiased? Does the MLE satisfy the regularity conditions? I'm really confused by this question.
First calculate the likelihood
$$L(\theta)=\frac{1}{\theta^n}\cdot \mathbb{1}_{(X_{(n)},\infty)}(\theta)$$
Where $X_{(n)}$ is the maximum of $X_i$.
It is clear that $L$ is strictly decreasing in $\theta$, and since $X_{(n)}$ is not included in the domain, $L$ does not attain a maximum.
But the correct definition of the ML estimator is the argsup, not the argmax, so $\hat{\theta}$ can exist without belonging to the domain of $\theta$; it only needs to belong to its Euclidean closure.
Concluding, $\hat{\theta}_{ML}=X_{(n)}$ and using invariance property of ML Estimators:
$$\widehat{\theta^2}=(X_{(n)})^2$$
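This estimator is not unbiased: since $X_{(n)}/\theta$ has density $n t^{n-1}$ on $(0,1)$, we get $E\big[(X_{(n)})^2\big]=\frac{n}{n+2}\theta^2<\theta^2$. A quick simulation (with hypothetical values $\theta=2$, $n=10$) illustrates the downward bias:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 200_000  # hypothetical parameter values

# Draw `reps` samples of size n from U(0, theta), take each sample's maximum,
# and square it to obtain the MLE of theta^2 for each replication.
samples = rng.uniform(0.0, theta, size=(reps, n))
mle_sq = samples.max(axis=1) ** 2

# Theory: E[(X_(n))^2] = n/(n+2) * theta^2, which is strictly below theta^2.
print(mle_sq.mean())           # simulated mean, close to 10/12 * 4
print(n / (n + 2) * theta**2)  # exact expectation: 3.333...
print(theta**2)                # the target: 4.0
```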
I'll give you some pointers to solve the question. First, you need to calculate the likelihood function $L$. For the uniform distribution in your case you should obtain:
$L(\theta) = \frac{1}{\theta^{n}}$ for $x_{1},\, x_{2},\, \dots,\, x_{n} \in (0,\, \theta)$
and $0$ elsewhere.
Consider the loglikelihood where the likelihood is not zero:
$\log(L(\theta)) = -n\log(\theta)$ for points where $L(\theta)$ is non-zero.
Now notice that its derivative, $-n/\theta$, is always negative, so the loglikelihood is strictly decreasing in $\theta$. This implies that the smallest possible value for $\theta$ will be the MLE, since any larger value must have a lower likelihood.
Now, the lowest possible value for $\theta$ is the highest value you observed in your sample. (It couldn't be any lower, since then the highest observed value would not lie in the interval $(0, \theta)$.)
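This argument can be checked numerically. A minimal sketch with a hypothetical sample (here the sample maximum is 1.8):

```python
import numpy as np

# Hypothetical sample from U(0, theta); X_(n) = 1.8 here.
x = np.array([0.3, 1.1, 1.8, 0.7])

def likelihood(theta):
    # 1/theta^n if all observations lie in (0, theta), else 0.
    return theta ** -len(x) if x.max() < theta else 0.0

# The likelihood is zero for theta below the sample maximum and
# strictly decreasing above it, so the supremum sits at X_(n).
print(likelihood(1.7))  # 0.0: such a theta cannot have produced the sample
print(likelihood(1.9))  # positive, and larger than the value at theta = 2.5
print(likelihood(2.5))
```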
Now notice that the squaring function is injective on the positive numbers, and thus a bijection onto its image. You can then use the general invariance property: if $\hat{a}$ is an MLE for $a$, then $g(\hat{a})$ is an MLE for $g(a)$, where $g$ is a bijection.
As for the regularity conditions, it depends on which ones you mean. You didn't include them in the question, but the usual issue with the uniform distribution is that the support of its PDF depends on $\theta$, which violates the standard regularity conditions.