If the mapping $\theta \mapsto g(\theta)$ is one-to-one, then $g(\widehat\theta)$ is the MLE of $g(\theta)$.
I don't understand this. If $g$ is a monotonically decreasing function, then aren't we minimizing over $\theta$ instead of maximizing?
The new parameter space is $\Gamma$, the image of $\Omega$ under the transformation $g$. We are interested in finding the MLE of $\psi=g(\theta)$. The likelihood function can be written as
$$\begin{split}f(x|\theta)&=f(x|g^{-1}(g(\theta)))\\ &=f(x|g^{-1}(\psi))\end{split}$$
We know that the likelihood is maximized at the MLE of $\theta$, namely $\hat\theta$. Since $g$ is one-to-one, $g^{-1}(\psi)$ ranges over all of $\Omega$ as $\psi$ ranges over $\Gamma$, so the reparametrized likelihood attains its maximum exactly when $g^{-1}(\hat\psi)=\hat\theta$. That is,
$$\begin{split}f(x|g^{-1}(\hat\psi))&=f(x|\hat\theta)\\ g^{-1}(\hat\psi)&=\hat\theta\\ \hat\psi&=g(\hat\theta).\end{split}$$
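As a concrete illustration with a decreasing $g$ (a small worked example of my own, not part of the original statement): let $X_1,\dots,X_n$ be i.i.d. Exponential with rate $\theta$, so the likelihood $\theta^n e^{-\theta n\bar x}$ is maximized at $\hat\theta = 1/\bar x$, and let $\psi = g(\theta) = 1/\theta$ be the mean, a one-to-one, monotonically decreasing map. Then
$$f(x|g^{-1}(\psi))=\prod_{i=1}^n \frac{1}{\psi}e^{-x_i/\psi}=\psi^{-n}e^{-n\bar x/\psi},$$
which is maximized (not minimized) at $\hat\psi=\bar x=1/\hat\theta=g(\hat\theta)$.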
So in general the MLE of $\psi=g(\theta)$ is $g(\hat\theta)$. Note that $g$ being monotonically decreasing has nothing to do with how the likelihood is optimized; we always maximize the likelihood, never $g$ itself. What matters is that $g$ is one-to-one, which allows us to take its inverse.
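If you want to check this numerically, here is a minimal sketch in Python (my own illustration, not part of the original answer), assuming NumPy and SciPy are available. It maximizes the exponential log-likelihood once over the rate $\theta$ and once over the reparametrized $\psi = g(\theta) = 1/\theta$, and the two maximizers come out related by $g$, as the invariance property says.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Simulated data: Exponential with true rate theta = 0.5 (so true mean psi = 2.0)
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)

def neg_loglik_theta(theta):
    # Negative log-likelihood in the original parameter (rate theta):
    # log L(theta) = n*log(theta) - theta*sum(x)
    return -(len(x) * np.log(theta) - theta * x.sum())

def neg_loglik_psi(psi):
    # Same likelihood reparametrized by psi = g(theta) = 1/theta (the mean),
    # i.e. evaluated at g^{-1}(psi) = 1/psi.
    # We still *maximize*, even though g is decreasing.
    return neg_loglik_theta(1.0 / psi)

theta_hat = minimize_scalar(neg_loglik_theta, bounds=(1e-6, 10.0), method="bounded").x
psi_hat = minimize_scalar(neg_loglik_psi, bounds=(1e-6, 10.0), method="bounded").x

print(theta_hat)        # approximately 1 / mean(x)
print(psi_hat)          # approximately mean(x)
print(1.0 / theta_hat)  # matches psi_hat, i.e. psi_hat = g(theta_hat)
```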