Derive unbiased estimator for $\theta$ when $X_i\sim f(x\mid\theta)=\frac{2x}{\theta^2}\mathbb{1}_{(0,\theta)}(x)$


Exercise: Let $X_1,\ldots,X_n$ be a random sample from the distribution with density $$f(x\mid\theta) = \dfrac{2x}{\theta^2}\mathbb{1}_{(0,\theta)}(x)$$ w.r.t. the Lebesgue measure. Derive an unbiased estimator for $\theta$.

What I've tried: I need to find an estimator $\delta(X_1,\ldots,X_n)$ such that $\operatorname{E}\big[\delta(X_1,\ldots,X_n)\big] = \theta$. I tried the maximum likelihood estimator, but that got me nowhere. On the event that all the indicators equal one, the log-likelihood is $\log L = \log\prod\limits_{i = 1}^n\dfrac{2x_i}{\theta^2}\mathbb{1}_{(0,\theta)}(x_i) = \sum\limits_{i= 1}^n\log(2x_i) - 2n\log\theta$ (I'm not sure whether I can drop the indicator function like this, by the way). If I now take the derivative w.r.t. $\theta$ and set it to zero, I get $-2n/\theta = 0$, which has no solution, so this doesn't give me an estimator at all, let alone an unbiased one.

Question: How do I solve this exercise? And in general, what would an efficient approach be for finding an unbiased estimator in exercises like this one?

Thanks in advance!



BEST ANSWER

In general, note that maximum likelihood estimators are not necessarily unbiased estimators.

I'm not familiar with Lebesgue integration, but hopefully non-measure-theoretic tools will do the job here.

First of all, observe that $$\mathbb{E}[X_1]=\dfrac{2}{\theta^2}\int_{0}^{\theta}x^2\text{ d}x=\dfrac{2}{\theta^2}\cdot\dfrac{\theta^3}{3}=\dfrac{2\theta}{3}\text{.}$$ Thus, the estimator $$\hat{\theta}=\dfrac{3}{2n}\sum_{i=1}^{n}X_i$$ is unbiased for $\theta$, since $$\mathbb{E}[\hat{\theta}]=\dfrac{3}{2n}\sum_{i=1}^{n}\mathbb{E}[X_i]=\dfrac{3}{2n}\cdot \dfrac{2\theta}{3}\cdot n = \theta\text{.}$$
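If you want to sanity-check this numerically, here is a minimal Monte Carlo sketch (assuming NumPy; $\theta = 2$ and $n = 10$ are arbitrary illustrative choices). Sampling uses inversion: the CDF is $F(x) = x^2/\theta^2$ on $(0,\theta)$, so $X = \theta\sqrt{U}$ with $U \sim \mathrm{Unif}(0,1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 100_000  # illustrative values, not from the exercise

# Sample by inversion: F(x) = x^2 / theta^2 on (0, theta), so X = theta * sqrt(U).
x = theta * np.sqrt(rng.random((reps, n)))

# theta_hat = (3 / (2n)) * sum_i X_i, i.e. 1.5 times the sample mean.
theta_hat = 1.5 * x.mean(axis=1)

print(theta_hat.mean())  # ≈ 2.0, consistent with unbiasedness
```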


The likelihood function is\begin{align*} L(θ; x_1, \cdots, x_n) &= \frac{2^n}{θ^{2n}} \prod_{k = 1}^n x_k \prod_{k = 1}^n I_{(0, θ)}(x_k)\\ &= \frac{2^n}{θ^{2n}} \prod_{k = 1}^n x_k I_{(0, +\infty)}\left( \min_{1 \leqslant k \leqslant n} x_k \right) I_{(-\infty, θ)}\left( \max_{1 \leqslant k \leqslant n} x_k \right). \end{align*} For fixed $x_1, \cdots, x_n$, $L(θ; x_1, \cdots, x_n) = 0$ for $θ < \max\limits_{1 \leqslant k \leqslant n} x_k$, and $L(θ; x_1, \cdots, x_n)$ is decreasing with respect to $θ$ for $θ > \max\limits_{1 \leqslant k \leqslant n} x_k$. Thus the MLE of $θ$ is$$ \hat{θ}(X_1, \cdots, X_n) = \max_{1 \leqslant k \leqslant n} X_k. $$
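As a quick numerical check that this MLE is biased (a minimal sketch assuming NumPy; the sampler uses inversion, since $F(x) = x^2/θ^2$ on $(0, θ)$ gives $X = θ\sqrt{U}$, and $θ = 2$, $n = 10$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 100_000  # illustrative values

# Inversion sampling: F(x) = x^2 / theta^2 on (0, theta), so X = theta * sqrt(U).
x = theta * np.sqrt(rng.random((reps, n)))

mle = x.max(axis=1)  # MLE = sample maximum
print(mle.mean())    # ≈ 1.905 < 2: the maximum systematically underestimates theta
```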

Note that the CDF of each $X_i$ is $F(x; θ) = x^2/θ^2$ for $0 < x < θ$, so the density function of $\max\limits_{1 \leqslant k \leqslant n} X_k$ is$$ f_n(x; θ) = n (F(x; θ))^{n - 1} f(x; θ) = \frac{2n}{θ^{2n}} x^{2n - 1} I_{(0, θ)}(x), $$ thus$$ E_θ(\hat{θ}) = \int_0^θ x \cdot \frac{2n}{θ^{2n}} x^{2n - 1} \,\mathrm{d}x = \frac{2n}{2n + 1} θ. $$ Therefore, an unbiased estimator of $θ$ is $\displaystyle \frac{2n + 1}{2n} \max\limits_{1 \leqslant k \leqslant n} X_k$.
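A short simulation (same illustrative setup as the sketch above, again assuming NumPy) confirms that the corrected maximum is unbiased, and suggests it is also noticeably less variable than the moment-based estimator from the other answer:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 10, 100_000  # illustrative values

x = theta * np.sqrt(rng.random((reps, n)))  # inversion sampling, as before

est_max = (2 * n + 1) / (2 * n) * x.max(axis=1)  # bias-corrected maximum
est_mom = 1.5 * x.mean(axis=1)                   # moment-based estimator

for name, est in (("corrected max", est_max), ("moment-based", est_mom)):
    print(f"{name}: mean ≈ {est.mean():.3f}, sd ≈ {est.std():.3f}")
# Both means come out ≈ 2.0; the corrected maximum shows the smaller sd.
```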