Question regarding the Maximum Likelihood Method


Given a continuous r.v. $X$ with density $f(x; \theta)$ and a r.v. $Y=h(X)$, where $h(\cdot)$ is a strictly monotone differentiable function, how can one prove that the ML method applied to a sample $x_1, \ldots, x_N$ delivers the same estimator as the ML method applied to the transformed data $h(x_1), \ldots, h(x_N)$?

Best answer:

This is a sketch, leaving some calculations to you.

Denote the CDFs of $X,Y$ by $F,G$ respectively; denote the PDFs by $f,g$ respectively. Denote $y_i=h(x_i)$.

By the monotonicity (assume $h$ is increasing; the decreasing case is analogous, with $G = 1 - F \circ h^{-1}$ and an absolute value on the Jacobian), $G=F \circ h^{-1}$, so $g=(f \circ h^{-1}) \cdot (h^{-1})'$*. Since $h$ does not depend on $\theta$, differentiating in $\theta$ gives $g_\theta=(f_\theta \circ h^{-1}) \cdot (h^{-1})'$.

Now assume that $\theta^*$ is the MLE for the data vector $x_1, \ldots, x_N$, so the score $\sum_{i=1}^N \frac{f_\theta(x_i; \theta^*)}{f(x_i; \theta^*)}$ vanishes. Calculate the score of the transformed data, $\sum_{i=1}^N \frac{g_\theta(y_i; \theta^*)}{g(y_i; \theta^*)}$: the Jacobian factor $(h^{-1})'$ cancels in the ratio. Conclude that this score is zero as well, and then check that this stationary point is indeed a maximum.
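Writing out the calculation (with $x_i = h^{-1}(y_i)$ and $f_\theta = \partial f/\partial\theta$), the Jacobian factor cancels, so the two score functions coincide term by term:

```latex
\sum_{i=1}^{N} \frac{g_\theta(y_i;\theta)}{g(y_i;\theta)}
= \sum_{i=1}^{N}
  \frac{f_\theta\!\left(h^{-1}(y_i);\theta\right)\,(h^{-1})'(y_i)}
       {f\!\left(h^{-1}(y_i);\theta\right)\,(h^{-1})'(y_i)}
= \sum_{i=1}^{N} \frac{f_\theta(x_i;\theta)}{f(x_i;\theta)}.
```

Equivalently, $\log g(y_i;\theta) = \log f(x_i;\theta) + \log (h^{-1})'(y_i)$, and the second term does not involve $\theta$, so the two log-likelihoods differ by a constant and have the same maximizer.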

* Here I'm abusing notation by composing functions that can't really be composed. The intent is $(f \circ h)(x,\theta) = f(h(x),\theta)$.
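As a numerical sanity check (a minimal sketch, not part of the proof): take $X \sim \mathrm{Exp}(\theta)$ with $f(x;\theta) = \theta e^{-\theta x}$ and the illustrative transform $h(x) = x^2$, which is strictly increasing on $(0,\infty)$. Maximizing the two likelihoods numerically should give the same estimate, which for this model also equals the closed form $N/\sum_i x_i$.

```python
import math
import random

# Density of Y = h(X) with h(x) = x**2 on (0, inf):
# g(y; theta) = f(sqrt(y); theta) * (d/dy) sqrt(y)
#             = theta * exp(-theta * sqrt(y)) / (2 * sqrt(y)).

def nll_x(theta, xs):
    """Negative log-likelihood of the original sample under f(x; theta)."""
    return -sum(math.log(theta) - theta * x for x in xs)

def nll_y(theta, ys):
    """Negative log-likelihood of the transformed sample under g(y; theta).
    The Jacobian term -log(2*sqrt(y)) does not involve theta."""
    return -sum(math.log(theta) - theta * math.sqrt(y)
                - math.log(2 * math.sqrt(y)) for y in ys)

def minimize(f, lo, hi, iters=200):
    """Ternary search for the minimizer of a unimodal function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

random.seed(0)
xs = [random.expovariate(2.0) for _ in range(1000)]  # true theta = 2.0
ys = [x ** 2 for x in xs]                            # transformed data

mle_x = minimize(lambda t: nll_x(t, xs), 0.01, 10.0)
mle_y = minimize(lambda t: nll_y(t, ys), 0.01, 10.0)

print(mle_x, mle_y)  # the two estimates agree up to search tolerance
```

Both maximizations land on the same $\theta^*$, illustrating that the $\theta$-dependent part of the transformed log-likelihood is identical to the original one.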