Maximum likelihood estimator of a product of non-negative functions


Suppose that $a(\cdot)$ and $b(\cdot)$ are two non-negative functions such that $$f(x;\theta)=a(\theta)b(x)$$ is a probability density function for each $\theta > 0$. Find the maximum likelihood estimator of $\theta$.

My try: Our likelihood function is given by $$L(\theta) = \prod_{i=1}^n a(\theta)b(x_i) = a(\theta)^n\prod_{i=1}^n b(x_i)$$ The log-likelihood function is given by $$\ln L(\theta) = n\ln a(\theta) + \sum_{i=1}^n \ln b(x_i)$$ Differentiating with respect to $\theta$ and equating to zero we get $$n\,\frac{a'(\theta)}{a(\theta)} = 0,$$ which obviously leads nowhere.

Moreover, the question itself seems weird to me. I am used to the form "Given a random sample $X_1,...,X_n$ of size $n$ (...)"; since this is missing here, does that imply that I cannot use the usual method I demonstrated above?
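A short observation on why the calculus route must fail here (a sketch, using only the fact that $f$ integrates to one): normalization ties $a(\theta)$ to the support of $b$, since
$$1=\int f(x;\theta)\,\mathrm dx=a(\theta)\int b(x)\,\mathrm dx.$$
If the region of integration did not depend on $\theta$, then $a(\theta)=\left(\int b(x)\,\mathrm dx\right)^{-1}$ would be the same constant for every $\theta$, and $\theta$ would not be identifiable at all. So the $\theta$-dependence has to enter through the support of the density, which is why no stationary-point equation can produce the estimator.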

Lastly, if you want to, could you check the exercise below for errors?

Let $X_1,...,X_n$ denote a random sample from $$f(x;\theta) = f_\theta (x) = \theta f_1(x) + (1-\theta)f_0 (x)$$ where $0 \leq \theta \leq 1$ and $f_0(\cdot)$ and $f_1(\cdot)$ are known densities. Estimate $\theta$ by the method of moments.

Answer: First, we need to write $E[X]$ in a more useful form: \begin{align*} E[X] &= \int_{-\infty}^\infty x\cdot f(x;\theta)dx = \int_{-\infty}^\infty x(\theta f_1(x) + (1-\theta)f_0 (x))dx \\ &= \theta \int_{-\infty}^\infty x\cdot f_1 (x)dx + (1-\theta)\int_{-\infty}^\infty x\cdot f_0 (x)dx \\ &= \theta \int x(f_1-f_0)dx + \int x f_0 dx \\ &=\theta \left(E_1\left[X\right] - E_0\left[X\right]\right) + E_0\left[X\right] \end{align*} Equating this to the first sample moment ($m_1'$) we get: \begin{align*} m_1'= \theta \left(E_1\left[X\right] - E_0\left[X\right]\right) + E_0\left[X\right] \end{align*} which is equivalent to \begin{align*} \theta = \dfrac{m_1' - E_0[X]}{E_1[X] - E_0[X]} \end{align*} Hence, our method-of-moments estimator for $\theta$ is given by: $$\hat{\theta} = \dfrac{m_1' - E_0[X]}{E_1[X] - E_0[X]}$$
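As a numerical sanity check of this estimator (a sketch with a hypothetical concrete choice of the known densities: $f_0 = N(0,1)$ and $f_1 = N(3,1)$, so $E_0[X]=0$ and $E_1[X]=3$), drawing a large sample from the mixture should give $\hat\theta \approx \theta$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concrete choice: f0 = N(0, 1), f1 = N(3, 1),
# so E0[X] = 0 and E1[X] = 3.
theta_true = 0.3
n = 100_000

# Draw from the mixture: component f1 with probability theta, else f0.
component = rng.random(n) < theta_true
x = np.where(component, rng.normal(3.0, 1.0, n), rng.normal(0.0, 1.0, n))

E0, E1 = 0.0, 3.0
m1 = x.mean()                      # first sample moment m_1'
theta_hat = (m1 - E0) / (E1 - E0)  # method-of-moments estimator
print(theta_hat)
```

With $n = 100{,}000$ the standard error of $\hat\theta$ is well below $0.01$, so the printed value should be close to $0.3$.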

Both are questions from "Introduction to the theory of statistics" by Mood, Graybill and Boes.

Thanks in advance!


On BEST ANSWER

Another example where including the (crucial) indicator functions in the densities simplifies everything... Here the PDF is $$ f(x;\theta)=a(\theta)b(x)\mathbf 1_{[0,\theta]}(x), $$ where $$ \frac1{a(\theta)}=\int_0^\theta b(x)\,\mathrm dx, $$ hence the likelihood of a sample $\mathbf x=(x_k)$ is $$ L(\mathbf x,\theta)=\prod_kf(x_k;\theta)=a(\theta)^n\,\mathbf 1_{\theta\geqslant m(\mathbf x)}\,\prod_kb(x_k), $$ where $$ m(\mathbf x)=\max_kx_k. $$

The last product does not depend on $\theta$, hence one can forget it. The indicator function shows that $L(\mathbf x,\theta)$ can be nonzero only when $\theta\geqslant m(\mathbf x)$. And $\theta\mapsto a(\theta)$ is nonincreasing, hence one looks for $\theta$ as small as possible. Finally, $L(\mathbf x,\theta)$ is maximal when $\theta=\hat\theta(\mathbf x)$ with $$ \hat\theta(\mathbf x)=m(\mathbf x). $$
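This argument can be checked numerically (a sketch with a hypothetical concrete instance: $b(x)=2x$, so that $a(\theta)=1/\theta^2$ and $f(x;\theta)=2x/\theta^2$ on $[0,\theta]$). The log-likelihood is strictly decreasing for $\theta\geqslant m(\mathbf x)$ and $-\infty$ below it, so a grid search should return $\hat\theta=m(\mathbf x)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical concrete instance: b(x) = 2x, so 1/a(theta) = theta**2
# and f(x; theta) = 2x / theta**2 on [0, theta].
theta_true = 2.0
n = 500
# Inverse-CDF sampling: F(x) = x**2 / theta**2, so x = theta * sqrt(u).
x = theta_true * np.sqrt(rng.random(n))

m = x.max()  # m(x) = max_k x_k

def log_lik(theta):
    # The indicator makes the likelihood zero (log-lik = -inf) unless theta >= max(x).
    if theta < m:
        return -np.inf
    return n * np.log(1.0 / theta**2) + np.sum(np.log(2.0 * x))

# Grid search over admissible theta; the maximizer should be m itself.
grid = np.linspace(m, 2.0 * m, 10_000)
best = grid[np.argmax([log_lik(t) for t in grid])]
print(m, best)
```

Because the log-likelihood is strictly decreasing on $[m(\mathbf x),\infty)$, the argmax lands on the smallest admissible grid point, i.e. the sample maximum, matching the answer's $\hat\theta(\mathbf x)=m(\mathbf x)$.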